991 results for Cryptography, Discrete Logarithm, Extension Fields, Karatsuba Multiplication, Normal Basis


Relevance: 30.00%

Abstract:

The lognormal distribution has abundant applications in various fields. In the literature, most inferences on the two parameters of the lognormal distribution are based on Type-I censored sample data. However, exact measurements are not always attainable, especially when an observation lies below or above the detection limits, and only the numbers of measurements falling into predetermined intervals can be recorded instead; such data are called grouped data. In this paper, we show the existence and uniqueness of the maximum likelihood estimators of the two parameters of the underlying lognormal distribution with Type-I censored data and with grouped data. The proof is first established for the normal distribution and then extended to the lognormal distribution through the invariance property. The results are applied to estimate the median and mean of the lognormal population.
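
As a rough illustration of the grouped-data likelihood described above, the sketch below numerically maximizes it for hypothetical interval counts (the interval boundaries, counts, and the use of a generic optimizer are assumptions for illustration; the paper's contribution is the existence and uniqueness proof, not this computation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical grouped data: counts of log-observations falling in
# predetermined intervals (-inf, b1], (b1, b2], ..., (bk, inf).
# By the invariance property, fitting on the log scale covers the
# lognormal case.
bounds = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])  # log-scale boundaries
counts = np.array([12, 35, 40, 13])

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)            # keep sigma > 0
    cdf = norm.cdf((bounds - mu) / sigma)
    cell_probs = np.diff(cdf)            # P(observation in each interval)
    return -np.sum(counts * np.log(cell_probs))

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Median and mean of the lognormal population:
median_hat = np.exp(mu_hat)
mean_hat = np.exp(mu_hat + sigma_hat**2 / 2)
print(mu_hat, sigma_hat, median_hat, mean_hat)
```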

Relevance: 30.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. For the first, the focus is on joint inference beyond the standard setting of multivariate continuous data that has dominated previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
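
To make the latent structure concrete, here is a minimal sketch of how a rank-k PARAFAC (latent class) model induces a probability tensor for multivariate categorical data (the dimensions and parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d, k = 3, 4, 2          # 3 variables, 4 levels each, 2 latent classes

# Mixture weights and class-conditional marginals (random for illustration)
lam = rng.dirichlet(np.ones(k))                # lam[h] = P(class = h)
psi = rng.dirichlet(np.ones(d), size=(p, k))   # psi[j, h] = P(x_j | class h)

# PARAFAC form: P(x1, x2, x3) = sum_h lam[h] * prod_j psi[j, h, x_j]
tensor = np.einsum("h,ha,hb,hc->abc", lam, psi[0], psi[1], psi[2])
print(tensor.shape, tensor.sum())   # (4, 4, 4), sums to 1.0
```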

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide a convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
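
The flavor of a Gaussian posterior approximation can be sketched as follows. Note that this toy uses a generic Laplace-style approximation (posterior mode and inverse Hessian) on a simulated Poisson log-linear model, not the optimal Gaussian approximation derived in Chapter 4:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                  # toy design matrix
beta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ beta_true))         # Poisson log-linear data

def neg_log_post(beta, tau2=10.0):
    eta = X @ beta
    # Poisson log-likelihood plus a Gaussian prior, up to constants
    return -(y @ eta - np.exp(eta).sum()) + beta @ beta / (2 * tau2)

res = minimize(neg_log_post, np.zeros(3), method="BFGS")
mode = res.x

# Gaussian approximation: mean = posterior mode, covariance = inverse Hessian
H = X.T @ (np.exp(X @ mode)[:, None] * X) + np.eye(3) / 10.0
cov = np.linalg.inv(H)
print(mode, np.sqrt(np.diag(cov)))
```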

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
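
For reference, a minimal sketch of the truncated normal (Albert-Chib) data augmentation Gibbs sampler for probit regression, run on simulated rare-event data; the slow mixing shows up as a lag-1 autocorrelation near one. The data, flat prior, and chain length are illustrative choices, not those of the chapter:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, p = 500, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, 0.02, size=n)        # rare events: few successes

V = np.linalg.inv(X.T @ X)               # flat-prior posterior covariance factor
L = np.linalg.cholesky(V)
beta = np.zeros(p)
draws = []
for it in range(2000):
    mu = X @ beta
    # z_i | y_i, beta ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1,
    # and to (-inf, 0] if y_i = 0 (truncnorm bounds are standardized)
    a = np.where(y == 1, -mu, -np.inf)
    b = np.where(y == 1, np.inf, -mu)
    z = truncnorm.rvs(a, b, loc=mu, scale=1.0, random_state=rng)
    # beta | z ~ N(V X'z, V)
    beta = V @ (X.T @ z) + L @ rng.normal(size=p)
    draws.append(beta.copy())

draws = np.asarray(draws)
# Lag-1 autocorrelation of the intercept chain: close to 1 => slow mixing
ac1 = np.corrcoef(draws[:-1, 0], draws[1:, 0])[0, 1]
print(ac1)
```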

Relevance: 30.00%

Abstract:

I discuss geometry and normal forms for pseudo-Riemannian metrics with parallel spinor fields in some interesting dimensions. I also discuss the interaction of these conditions for parallel spinor fields with the condition that the Ricci tensor vanish (which, for pseudo-Riemannian manifolds, is not an automatic consequence of the existence of a nontrivial parallel spinor field).

Relevance: 30.00%

Abstract:

Previous studies of lithospheric strength in central Iberia fail to resolve the depth distribution of earthquakes because of rheological uncertainties. Therefore, new constraints are considered here (the crustal structure from a density model) and several parameters (tectonic regime, mantle rheology, strain rate) are tested to properly examine the role of lithospheric strength in the intraplate seismicity and the Cenozoic evolution. The strength distribution with depth, the integrated strength, the effective elastic thickness and the seismogenic thickness have been calculated by finite element modelling of the lithosphere across the Central System mountain range and the bordering Duero and Madrid sedimentary basins. Only a dry mantle under strike-slip/extension at a strain rate of 10^-15 s^-1, or under extension at 10^-16 s^-1, yields a strong lithosphere. The integrated strength and the elastic thickness are lower in the mountain chain than in the basins. These anisotropies have been maintained since the Cenozoic and determine the mountain uplift and the biharmonic folding of the Iberian lithosphere during the Alpine deformations. The seismogenic thickness bounds the seismic activity to the upper–middle crust, and the decreasing crustal strength from the Duero Basin towards the Madrid Basin is matched by a parallel increase in Plio–Quaternary deformation and seismicity. However, elasto–plastic modelling shows that the current African–Eurasian convergence is accommodated elastically or ductilely, which accounts for the low seismicity recorded in this region.
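
A schematic of the strength-envelope computation underlying such models is sketched below: strength at each depth is the minimum of a Byerlee-type brittle law and a power-law creep law, and the integrated strength is the depth integral. All rheological constants, the geotherm, and the pore-fluid factor are generic placeholder values, not those used in the paper:

```python
import numpy as np

# Depth profile and a simple linear geotherm (illustrative values only)
z = np.linspace(0, 100e3, 400)            # depth, m
T = 283.0 + 20e-3 * z                     # K, 20 K/km gradient
rho, g = 2800.0, 9.81
strain_rate = 1e-15                       # 1/s, one of the tested values

# Brittle (Byerlee-type) strength, extensional regime
sigma_brittle = 0.75 * rho * g * z * (1 - 0.36)   # Pa, pore-fluid factor 0.36

# Ductile (power-law creep) strength for a generic dry rheology
A, n, Q, R = 2.5e-17, 3.5, 532e3, 8.314   # illustrative flow-law constants
sigma_ductile = (strain_rate / A) ** (1 / n) * np.exp(Q / (n * R * T))

strength = np.minimum(sigma_brittle, sigma_ductile)
integrated = np.trapz(strength, z)        # integrated strength, N/m
print(f"integrated strength: {integrated:.2e} N/m")
```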

Relevance: 30.00%

Abstract:

This paper proposes a JPEG-2000 compliant architecture capable of computing the 2-D inverse discrete wavelet transform. The proposed architecture uses a single processor and a row-based schedule to minimize control and routing complexity and to ensure that processor utilization is kept at 100%. The design handles image borders through symmetric extension. The architecture has been implemented on a Xilinx Virtex2 FPGA.
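
A software analogue of the computation the architecture performs, assuming the PyWavelets library (pywt's 'bior4.4' corresponds to the CDF 9/7 wavelet used by lossy JPEG 2000, and mode='symmetric' reproduces the symmetric border extension):

```python
import numpy as np
import pywt

image = np.random.rand(64, 64)

# Forward 2-D DWT with symmetric border extension (as in JPEG 2000)
coeffs = pywt.wavedec2(image, wavelet="bior4.4", mode="symmetric", level=3)

# Inverse 2-D DWT: the operation the hardware architecture computes
reconstructed = pywt.waverec2(coeffs, wavelet="bior4.4", mode="symmetric")

print(np.allclose(image, reconstructed[:64, :64]))  # perfect reconstruction
```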

Relevance: 30.00%

Abstract:

OBJECTIVES: Radiotherapy is planned to achieve the optimal physical dose distribution to the target tumour volume whilst minimising dose to the surrounding normal tissue. Recent in vitro experimental evidence has demonstrated an important role for intercellular communication in radiobiological responses following non-uniform exposures. This study aimed to model the impact of these effects in the context of techniques involving highly modulated radiation fields or spatially fractionated treatments such as GRID therapy.

METHODS: Using the small animal radiotherapy research platform (SARRP) as a key enabling technology to deliver precision image-guided radiotherapy, it is possible to achieve spatially modulated dose distributions that model typical clinical scenarios. In this work, we planned uniform and spatially fractionated dose distributions using multiple isocentres with beam sizes of 0.5-5 mm to obtain 50% volume coverage in a subcutaneous murine tumour model, and applied a model of cellular response that incorporates intercellular communication to assess the potential impact of signalling effects with different ranges.

RESULTS: Models of GRID treatment plans that incorporate intercellular signalling showed increased cell killing within the low-dose region. This results in an increase in the Equivalent Uniform Dose (EUD) for GRID exposures compared with standard models, with some GRID exposures predicted to be more effective than uniform delivery of the same physical dose.
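
The EUD referred to here is conventionally computed with the generalized-EUD formula EUD = (sum_i v_i D_i^a)^(1/a), where v_i are fractional voxel volumes and a is a tissue-specific parameter. A minimal sketch with placeholder doses and parameter value:

```python
import numpy as np

def generalized_eud(dose_per_voxel, a):
    """EUD = (mean of D_i^a)^(1/a) for equal voxel volumes;
    a < 0 for tumours, so cold spots dominate the result."""
    d = np.asarray(dose_per_voxel, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)

# Toy GRID-like dose distribution: half the tumour at 20 Gy, half at 2 Gy
grid_dose = np.concatenate([np.full(500, 20.0), np.full(500, 2.0)])
print(generalized_eud(grid_dose, a=-10))   # dominated by the low-dose region
```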

CONCLUSIONS: This study demonstrates the potential impact of radiation induced signalling on tumour cell response for spatially fractionated therapies and identifies key experiments to validate this model and quantify these effects in vivo.

ADVANCES IN KNOWLEDGE: This study highlights the unique opportunities now possible using advanced preclinical techniques to develop a foundation for biophysical optimisation in radiotherapy treatment planning.

Relevance: 30.00%

Abstract:

A novel surrogate model is proposed in lieu of a computational fluid dynamics (CFD) code for fast nonlinear aerodynamic modeling. First, a nonlinear function is identified on selected interpolation points defined by the discrete empirical interpolation method (DEIM). The flow field is then reconstructed by a least-squares approximation of flow modes extracted by proper orthogonal decomposition (POD). The proposed model is applied to the prediction of limit cycle oscillation for a plunge/pitch airfoil and for a delta wing with a linear structural model, and the results are validated against a time-accurate CFD-FEM code. The results show the model is able to replicate the aerodynamic forces and flow fields with sufficient accuracy while requiring a fraction of the CFD cost.
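
The two ingredients, POD modes from snapshots and DEIM point selection followed by a least-squares reconstruction, can be sketched compactly on synthetic data (the random snapshot matrix below merely stands in for CFD snapshots):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 1000, 40, 6                     # grid points, snapshots, modes

snapshots = rng.normal(size=(n, m))       # stand-in for CFD flow snapshots

# POD: left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                            # POD flow modes

# DEIM: greedy selection of interpolation points from the POD basis
idx = [int(np.argmax(np.abs(Phi[:, 0])))]
for j in range(1, r):
    c = np.linalg.solve(Phi[idx, :j], Phi[idx, j])
    res = Phi[:, j] - Phi[:, :j] @ c      # residual after fitting j modes
    idx.append(int(np.argmax(np.abs(res))))

# Reconstruct a new field from its values at the DEIM points, least squares
new_field = rng.normal(size=n)
coef, *_ = np.linalg.lstsq(Phi[idx, :], new_field[idx], rcond=None)
approx = Phi @ coef
print(sorted(idx))
```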

Relevance: 30.00%

Abstract:

The St. Lawrence River valley, in eastern Canada, is one of the most seismically active regions in eastern North America and is characterized by numerous intraplate earthquakes. After the rigid rotation of the tectonic plate, glacial isostatic adjustment is by far the largest source of geophysical signal in eastern Canada. Crustal deformations and deformation rates in this region were studied using more than 14 years of observations (9 years on average) from 112 continuously operating GPS stations. The velocity field was obtained from daily GPS coordinate time series cleaned by applying a combined model with least-squares weighting. Velocities were estimated with noise models that include the temporal correlations of the three-dimensional coordinate time series. The horizontal velocity field shows the counterclockwise rotation of the North American plate at a mean rate of 16.8±0.7 mm/yr in a no-net-rotation frame with respect to ITRF2008. The vertical velocity field confirms uplift due to glacial isostatic adjustment throughout eastern Canada, with a maximum rate of 13.7±1.2 mm/yr, and subsidence to the south, mainly in the northern United States, with typical rates of −1 to −2 mm/yr and a minimum rate of −2.7±1.4 mm/yr. The noise behavior of the three-dimensional GPS coordinate time series was analyzed using spectral analysis and maximum likelihood estimation to test five noise models: power law; white noise; white plus flicker noise; white plus random-walk noise; and white plus flicker plus random-walk noise. The results show that the combination of white and flicker noise best describes the stochastic part of the time series. The amplitudes of all noise models are smallest in the north direction and largest in the vertical direction. White-noise amplitudes are roughly equal across the study area and are therefore exceeded, in all directions, by the flicker and random-walk noise. The flicker-noise model increases the uncertainty of the estimated velocities by a factor of 5 to 38 relative to the white-noise model. The velocities estimated under all noise models are statistically consistent. The estimated Euler pole parameters for this region are slightly, but significantly, different from the global rotation of the North American plate. This difference potentially reflects local stresses in this seismic region and stresses caused by the difference in intraplate velocities between the two shores of the St. Lawrence River. Crustal deformation in the region was studied using least-squares collocation. The interpolated horizontal velocities show spatially coherent motion: radially outward from the centers of maximum uplift in the north and radially inward toward the centers of maximum subsidence in the south, with typical rates of 1 to 1.6±0.4 mm/yr. However, this pattern becomes more complex near the margins of the former glaciated zones.

Based on their directions, the intraplate horizontal velocities can be divided into three distinct zones. This confirms the conclusions of other researchers on the existence of three ice domes in the study region before the Last Glacial Maximum. A spatial correlation is observed between the zones of higher-magnitude intraplate horizontal velocities and the seismic zones along the St. Lawrence River. The vertical velocities were then interpolated to model the vertical deformation. The model shows a maximum uplift rate of 15.6 mm/yr southeast of Hudson Bay and typical subsidence rates of 1 to 2 mm/yr in the south, mainly in the northern United States. Along the St. Lawrence River, the horizontal and vertical motions are spatially coherent: there is a southeastward displacement of about 1.3 mm/yr and a mean uplift of 3.1 mm/yr with respect to the North American plate. The vertical deformation rate is about 2.4 times larger than the intraplate horizontal deformation rate. The results of the deformation analysis show the current state of deformation in eastern Canada as expansion in the northern part (which is uplifting) and compression in the southern part (which is subsiding). Rotation rates average 0.011°/Myr. We observed NNW-SSE compression at a rate of 3.6 to 8.1 nstrain/yr in the Bas-Saint-Laurent seismic zone. In the Charlevoix seismic zone, extension at a rate of 3.0 to 7.1 nstrain/yr is oriented ENE-WSW. In the Western Quebec seismic zone, the deformation has a shear mechanism, with a compression rate of 1.0 to 5.1 nstrain/yr and an extension rate of 1.6 to 4.1 nstrain/yr. These measurements are consistent, to first order, with glacial isostatic adjustment models and with the maximum horizontal compressive stress of the World Stress Map project, obtained from focal mechanisms.
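
The spectral signature behind the noise-model comparison can be illustrated as follows: flicker noise has a power spectrum proportional to 1/f, so its periodogram has a log-log slope near -1, versus 0 for white noise and -2 for a random walk. The synthetic series, amplitudes, and band limits below are illustrative only:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(4)
n = 4096
# Synthetic daily series: white noise plus an approximate 1/f (flicker) part,
# built by shaping white noise with a 1/sqrt(f) spectral amplitude
white = rng.normal(size=n)
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = np.zeros(len(freqs), dtype=complex)
spectrum[1:] = (rng.normal(size=len(freqs) - 1)
                + 1j * rng.normal(size=len(freqs) - 1)) / np.sqrt(freqs[1:])
flicker = np.fft.irfft(spectrum, n)
series = white + 25 * flicker

f, P = periodogram(series, fs=1.0)
mask = (f > 0) & (f < 0.1)                   # low-frequency band
slope = np.polyfit(np.log(f[mask]), np.log(P[mask]), 1)[0]
print(slope)   # near -1 => flicker noise dominates at low frequencies
```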

Relevance: 30.00%

Abstract:

To determine good ecological status and support conservation of the marine area of the Bay of Biscay, the implementation of a new monitoring programme for rocky intertidal habitats is needed. A protocol has been adapted from the Brittany protocol for the water body FRFC11 "Basque coast" for the two indicators "intertidal macroalgae" and "subtidal macroalgae" under the Water Framework Directive (WFD) to qualify the ecological status. However, no protocol has been validated for fauna, given the meridional character of the benthic communities. Investigations of macroalgae communities in the intertidal zone, carried out in the WFD framework since 2008, constitute an important working basis. This is the aim of the Bigorno project (Intertidal Biodiversity of the south of the Bay of Biscay and Observation for New search and Monitoring for decision support), financed by the Agency of Marine Protected Areas and the Departmental Council. To improve knowledge, a sampling protocol was applied in 2015 on the boulder fields of Guéthary. This site is part of the Natura 2000 area "Rocky Basque coast and offshore extension"; it is also a ZNIEFF site and a restricted fishing area. The sampling strategy considers the heterogeneity of substrates and the presence of intertidal microhabitats. Two main habitats are present: "mediolittoral rock in exposed areas" and "boulder fields". The habitat "intertidal pools and permanent ponds" is also present but was not investigated. The sampling effort comprises 353 quadrats of 0.1 m², drawn randomly according to a spatially stratified sampling plan defined by habitat and algal belt. Taxa identification and enumeration are done on each quadrat. The objective of this work is to present results from data collected during the 2015 sampling program. The importance of characterizing the spatial distribution of benthic fauna communities along the Basque coast according to the algal belts defined during the WFD survey is highlighted. Concurrently, indicators of biodiversity were studied.

Relevance: 30.00%

Abstract:

Many tissue level models of neural networks are written in the language of nonlinear integro-differential equations. Analytical solutions have only been obtained for the special case that the nonlinearity is a Heaviside function. Thus the pursuit of even approximate solutions to such models is of interest to the broad mathematical neuroscience community. Here we develop one such scheme, for stationary and travelling wave solutions, that can deal with a certain class of smoothed Heaviside functions. The distribution that smoothes the Heaviside is viewed as a fundamental object, and all expressions describing the scheme are constructed in terms of integrals over this distribution. The comparison of our scheme and results from direct numerical simulations is used to highlight the very good levels of approximation that can be achieved by iterating the process only a small number of times.
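
A minimal numerical illustration of the underlying fixed-point idea, using a sigmoid as the smoothed Heaviside and a Mexican-hat kernel. All parameter values are illustrative, and this plain Picard iteration merely stands in for the distribution-based scheme developed in the paper:

```python
import numpy as np

# Discretized domain and Mexican-hat connectivity kernel
x = np.linspace(-10, 10, 512)
dx = x[1] - x[0]
w = np.exp(-x**2) - 0.5 * np.exp(-x**2 / 4)   # local excitation, lateral inhibition

def f(u, h=0.3, beta=20.0):
    """Sigmoidal firing rate: a smoothed Heaviside with threshold h."""
    return 1.0 / (1.0 + np.exp(-beta * (u - h)))

# Picard iteration for a stationary solution u(x) = int w(x - y) f(u(y)) dy
u = np.exp(-x**2)                             # initial bump guess
for _ in range(8):                            # a handful of iterations suffices
    u = dx * np.convolve(w, f(u), mode="same")

print(u.max(), (u > 0.3).sum() * dx)          # bump height and width
```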

Relevance: 30.00%

Abstract:

Case description: A 25-year-old man presented with a laceration on the radial side of the proximal phalanx of the 4th finger (flexor zone II), caused by a glass cut. Clinical findings: The sheaths of the flexor digitorum superficialis and profundus tendons were not shared; each tendon had a separate sheath. Treatment and outcome: The tendons were reconstructed with modified Kessler sutures; after 15 months the patient had a 30-degree extension lag, even after courses of physiotherapy. Clinical relevance: This is the first report of such a normal variation in human hand tendon anatomy.

Relevance: 30.00%

Abstract:

An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. It is a well-known fact that DVMs can also have extra, so-called spurious, collision invariants in addition to the physical ones. A DVM with only physical collision invariants, and thus without spurious ones, is called normal. For binary mixtures the concept of supernormal DVMs was also introduced, meaning that in addition to the DVM being normal, the restriction of the DVM to any single species is also normal. Here we introduce generalizations of this concept to DVMs for multicomponent mixtures. We also present some general algorithms for constructing such models and give some concrete examples of such constructions. One of our main results is that for any given number of species and any given rational mass ratios we can construct a supernormal DVM. The DVMs are constructed in such a way that for half-space problems, such as the Milne and Kramers problems, but also nonlinear ones, we obtain structures similar to those of the classical discrete Boltzmann equation for a single species, and therefore we can apply results obtained for the classical Boltzmann equation.
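
To make "normal" concrete: the brute-force sketch below enumerates the binary collisions of a small planar DVM and computes the dimension of its space of collision invariants. A normal DVM in dimension d has exactly d + 2 invariants (mass, d momentum components, and energy); the grid chosen here is just one simple example:

```python
import itertools
import numpy as np

# A small planar DVM: all integer velocities in {-1, 0, 1}^2
V = [np.array(v) for v in itertools.product([-1, 0, 1], repeat=2)]
N = len(V)

rows = []
for (i, j), (k, l) in itertools.product(
        itertools.combinations(range(N), 2), repeat=2):
    mom_ok = np.array_equal(V[i] + V[j], V[k] + V[l])
    en_ok = (V[i] @ V[i] + V[j] @ V[j]) == (V[k] @ V[k] + V[l] @ V[l])
    if mom_ok and en_ok and {i, j} != {k, l}:
        row = np.zeros(N)
        row[[i, j]] += 1          # pre-collision velocities
        row[[k, l]] -= 1          # post-collision velocities
        rows.append(row)

# Collision invariants phi satisfy C @ phi = 0, so their number is
# N minus the rank of the collision matrix C
C = np.array(rows)
rank = np.linalg.matrix_rank(C)
print("collision invariants:", N - rank)   # 4 = d + 2 expected if normal
```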

Relevance: 30.00%

Abstract:

For more than two decades we have witnessed in Latin America, and in Argentina particularly, the development of policies to extend the school day. We understand that the implementation of such policies is an opportunity to observe the school's behavior when faced with an attempt to modify one of its hardest components, school time; it also becomes a natural laboratory for analyzing how much the traditional organization of school time can resist, how it changes, and how these changes (if implemented) affect the rest of the school's components (spaces, groups, etc.). This paper presents the state of the art of the most significant studies in two research fields, in the context of primary education, on this matter: on the one hand, studies related to the organization and extension of school time and, on the other, research on the structural and structuring components of schooling. The literature review indicates that studies on school time and on the corresponding extension policies and programs do not report the difficulties found when trying to modify the hard components of the school system. Studies with the school system as their object have not approached the numerous school-time extension experiences, although time is one of the structural elements of the system.

Relevance: 30.00%

Abstract:

Jupiter and its moons form a complex dynamical system that includes several phenomena such as tidal interactions, the moons' librations, and resonances. One of the most interesting characteristics of the Jovian system is the presence of the Laplace resonance, in which the orbital periods of Ganymede, Europa and Io maintain a 4:2:1 ratio. It is interesting to study the role of the Laplace resonance in the dynamics of the system, especially regarding the dissipative nature of the tidal interaction between Jupiter and its closest moon, Io. Numerous theories have been proposed regarding the orbital evolution of the Galilean satellites, but they disagree about the amount of dissipation in the system, and therefore about the magnitude and direction of its evolution, mainly because of the lack of experimental data. The future JUICE space mission is a great opportunity to resolve this dispute. JUICE is an ESA (European Space Agency) L-class mission (the largest category of missions in the ESA Cosmic Vision) that, at the beginning of 2030, will be inserted into the Jovian system and will perform several flybys of the Galilean satellites, with the exception of Io. Subsequently, during the last part of the mission, it will orbit Ganymede for nine months, with a possible extension of the mission. The data that JUICE will collect will have exceptional accuracy, allowing investigation of several aspects of the dynamics of the system, especially the evolution of the Laplace resonance of the Galilean moons and its stability. This thesis focuses on the JUICE mission, in particular on the gravity estimation and orbit reconstruction of the Galilean satellites during the Jovian orbital phase using radiometric data. This is accomplished through an orbit determination technique called the multi-arc approach, using JPL's orbit determination software MONTE (Mission-analysis, Operations and Navigation Tool-kit Environment).
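
The 4:2:1 Laplace relation can be checked directly from the moons' published sidereal periods; in terms of mean motions n it reads n_Io - 3 n_Europa + 2 n_Ganymede ≈ 0:

```python
# Quick check of the Laplace relation n_Io - 3 n_Europa + 2 n_Ganymede ~ 0
# using the moons' sidereal periods in days (standard published values)
periods = {"Io": 1.769138, "Europa": 3.551181, "Ganymede": 7.154553}

n = {m: 360.0 / P for m, P in periods.items()}   # mean motions, deg/day
laplace = n["Io"] - 3 * n["Europa"] + 2 * n["Ganymede"]
print(f"{laplace:.4f} deg/day")   # very close to zero: the 4:2:1 lock
```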

Relevance: 20.00%

Abstract:

Cryosurgery is an efficient therapeutic technique used to treat benign and malignant cutaneous diseases. Its primary active mechanism relates to vascular effects on the treated tissue. After a cryosurgical procedure, exuberant granulation tissue forms at the application site, probably as a result of the angiogenic stimulus of the cryogen and the inflammatory response, particularly in endothelial cells. The aim was to evaluate the angiogenic effects of freezing as part of the healing of rat skin subjected to a previous injury. Two incisions were made in each of twenty rats, which were divided randomly into two groups of ten. After 3 days, cryosurgery with liquid nitrogen was performed on one of the incisions. Tissue samples were then collected, sectioned and stained for histopathological examination, to assess local angiogenesis at different time points and under the two conditions. It was possible to demonstrate that cryosurgery, in spite of promoting cell death and accentuated local inflammation soon after its application, induces quicker cell proliferation in the affected tissue, and maintenance of this rate in a second phase, than healing without this procedure. These findings, together with the knowledge that there is a direct relationship between mononuclear cells and neovascularization (the development of a rich system of new vessels in injury caused by cold), suggest that cryosurgery has an angiogenic stimulus, even though complete healing takes longer to occur. The significance level for statistical tests was 5% (p<0.05).