972 results for Maximum entropy statistical estimate


Relevance:

30.00%

Publisher:

Abstract:

Background: Equations to predict maximum heart rate (HRmax) in heart failure (HF) patients receiving beta-adrenergic blocking (BB) agents do not consider the cause of HF. We determined equations to predict HRmax in patients with ischemic and nonischemic HF receiving BB therapy. Methods and Results: Using treadmill cardiopulmonary exercise testing, we studied HF patients receiving BB therapy being considered for transplantation from 1999 to 2010. Exclusions were pacemaker and/or implantable defibrillator, left ventricular ejection fraction (LVEF) >50%, peak respiratory exchange ratio (RER) <1.00, and Chagas disease. We used linear regression equations to predict HRmax based on age in ischemic and nonischemic patients. We analyzed 278 patients, aged 47 ± 10 years, with ischemic (n = 75) and nonischemic (n = 203) HF. LVEF was 30.8 ± 9.4% and 28.6 ± 8.2% (P = .04), peak VO2 16.9 ± 4.7 and 16.9 ± 5.2 mL kg⁻¹ min⁻¹ (P = NS), and HRmax 130.8 ± 23.3 and 125.3 ± 25.3 beats/min (P = .051) in ischemic and nonischemic patients, respectively. We devised the equation HRmax = 168 − 0.76 × age (R² = 0.095; P = .007) for ischemic HF patients, but there was no significant relationship between age and HRmax in nonischemic HF patients (R² = 0.006; P = NS). Conclusions: Our study suggests that equations to estimate HRmax should consider the cause of HF. (J Cardiac Fail 2012;18:831-836)
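As a quick illustration of the regression just derived, here is a minimal Python sketch applying it (the function name and example value are ours, not the study's):

```python
def predict_hrmax_ischemic(age_years: float) -> float:
    """Predicted HRmax (beats/min) for ischemic HF patients on BB therapy,
    from the study's regression: HRmax = 168 - 0.76 * age."""
    return 168.0 - 0.76 * age_years

# Example: a patient at the cohort's mean age of 47 years.
print(predict_hrmax_ischemic(47.0))  # ~132.3, near the observed mean of 130.8
```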

Relevance:

30.00%

Publisher:

Abstract:

We used the statistical measures of information entropy, disequilibrium, and complexity to infer a hierarchy of equations of state for two types of compact stars from the broad class of neutron stars, namely, stars with hadronic composition and stars with strange quark composition. Our results show that, since order costs energy, Nature would favor the exotic strange stars, even though the question of how strange stars form cannot be answered within this approach. (C) 2012 Elsevier B.V. All rights reserved.
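The abstract does not spell out its definitions; one common discrete choice, the López-Ruiz-Mancini-Calbet (LMC) complexity, is sketched below as an assumption (the paper may well use continuous analogues adapted to stellar profiles):

```python
import numpy as np

def lmc_measures(p):
    """Normalized Shannon entropy H, disequilibrium D (squared distance
    from the uniform distribution), and LMC complexity C = H * D for a
    discrete distribution p. Assumed definitions, for illustration only."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    n = p.size
    nz = p[p > 0]
    H = -np.sum(nz * np.log(nz)) / np.log(n)  # normalized to [0, 1]
    D = np.sum((p - 1.0 / n) ** 2)            # distance from equiprobability
    return H, D, H * D

print(lmc_measures([0.7, 0.2, 0.1]))
```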

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we carry out robust modeling and influence diagnostics in Birnbaum-Saunders (BS) regression models. Specifically, we present some aspects of BS and log-BS distributions and their generalizations based on the Student-t distribution, and develop BS-t regression models, including maximum likelihood estimation based on the EM algorithm and diagnostic tools. In addition, we apply the obtained results to real insurance data, which illustrates the use of the proposed model. Copyright (c) 2011 John Wiley & Sons, Ltd.
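For orientation, the BS distribution is available in SciPy under the name fatiguelife; a plain maximum likelihood fit looks like the sketch below (this is not the paper's EM-based BS-t estimation, and the data are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic positive, right-skewed data standing in for insurance claims.
data = stats.fatiguelife.rvs(0.8, scale=2.0, size=500, random_state=rng)

# SciPy's "fatiguelife" is the Birnbaum-Saunders distribution; the first
# argument is the shape (alpha) and scale is beta. Fixing loc=0 gives the
# standard two-parameter BS model.
alpha, loc, beta = stats.fatiguelife.fit(data, floc=0)
print(f"alpha ~ {alpha:.3f}, beta ~ {beta:.3f}")
```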

Relevance:

30.00%

Publisher:

Abstract:

In this study we analyzed the phylogeographic pattern and historical demography of an endemic Atlantic forest (AF) bird, Basileuterus leucoblepharus, and tested the influence of the last glacial maximum (LGM) on its effective population size using coalescent simulations. We addressed two main questions: (i) Does B. leucoblepharus present population genetic structure congruent with the patterns observed for other AF organisms? (ii) How did the LGM affect the effective population size of B. leucoblepharus? We sequenced 914 bp of the mitochondrial gene cytochrome b and 512 bp of the nuclear intron 5 of beta-fibrinogen from 62 individuals from 15 localities along the AF. Both molecular markers revealed no genetic structure in B. leucoblepharus. Neutrality tests based on both loci showed significant demographic expansion. The extended Bayesian skyline plot showed that the species seems to have experienced demographic expansion starting around 300,000 years ago, during the late Pleistocene. This date does not coincide with the LGM, and the population size dynamics showed stability during the LGM. To further test the effect of the LGM on this species, we simulated seven demographic scenarios to explore whether populations suffered specific bottlenecks. The scenarios most congruent with our data were population stability during the LGM with bottlenecks older than this period. This is the first example of an AF organism that does not show phylogeographic breaks caused by vicariant events associated with climate change and geotectonic activities in the Quaternary. Differences in ecological and environmental tolerances and in habitat requirements possibly underlie the different evolutionary histories of these organisms. Our results show that the history of organism diversification in this megadiverse Neotropical forest is complex. Crown Copyright (c) 2012 Published by Elsevier Inc. All rights reserved.
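Scenario testing of this kind is often done with coalescent simulators such as msprime; the sketch below sets up one hypothetical bottleneck scenario (every parameter value here is a placeholder, not the paper's actual setting):

```python
import msprime

# Hypothetical single-population bottleneck: size drops, then recovers
# (times in generations, sizes in diploid individuals -- all placeholders).
demography = msprime.Demography()
demography.add_population(name="AF", initial_size=10_000)
demography.add_population_parameters_change(time=1_000, population="AF", initial_size=1_000)
demography.add_population_parameters_change(time=2_000, population="AF", initial_size=10_000)

ts = msprime.sim_ancestry(samples={"AF": 31}, demography=demography,
                          sequence_length=914, random_seed=7)
mts = msprime.sim_mutations(ts, rate=1e-8, random_seed=7)
print(f"simulated nucleotide diversity: {mts.diversity():.5f}")
```

Summary statistics simulated under several such scenarios can then be compared against the observed data to rank the scenarios.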

Relevance:

30.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during these cluster mergers re-accelerates high energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, which is due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions: • Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency? • How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected? • Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters? • Is it possible to constrain the magnetic field intensity and profile in galaxy clusters and the energetics of turbulence in the ICM from the comparison between model expectations and observations? Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM that are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters, and in particular the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM, and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter allow us to derive the evolution of the probability to form Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10²⁴ W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility to test our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “averaged” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last “geometrical” M_H-R_H correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R-R_H, P_R-M_H, P_R-T, P_R-L_X, ...) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume that is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
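Since the extended Press & Schechter formalism anchors the synthetic cluster populations of Chapts. 5 and 6, here is a toy numerical sketch of the basic PS mass function (the mean density and the power-law σ(M) below are placeholders, not the thesis's cosmology):

```python
import numpy as np

delta_c = 1.686          # linear collapse threshold
rho_bar = 8.5e10         # mean matter density [M_sun / Mpc^3], assumed value
slope = -0.3             # assumed logarithmic slope of sigma(M)

def sigma(M):
    """Toy rms mass fluctuation: a pure power law around 1e14 M_sun."""
    return (M / 1e14) ** slope

def dn_dM(M):
    """Press-Schechter mass function dn/dM [per M_sun per Mpc^3]."""
    nu = delta_c / sigma(M)
    return (np.sqrt(2 / np.pi) * rho_bar / M**2
            * nu * abs(slope) * np.exp(-nu**2 / 2))

for M in (1e14, 5e14, 1e15):   # cluster-scale masses in M_sun
    print(f"M = {M:.0e}: dn/dM ~ {dn_dM(M):.3e}")
```

The exponential cut-off at high ν is what makes massive clusters (and hence, in the re-acceleration scenario, giant Radio Halos) rare.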

Relevance:

30.00%

Publisher:

Abstract:

In this thesis two major topics inherent to medical ultrasound images are addressed: deconvolution and segmentation. In the first case a deconvolution algorithm is described that allows statistically consistent maximum a posteriori estimates of the tissue reflectivity to be restored. These estimates are proven to provide a reliable source of information for achieving an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in myocardial contour delineation and is feasible and accurate even on clinical data.
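For context on the simplest instance of such an estimator: with white Gaussian noise and a Gaussian prior on the reflectivity, the MAP estimate reduces to Wiener deconvolution. A generic 1-D sketch (not the thesis's algorithm) follows:

```python
import numpy as np

def wiener_deconvolve(y, psf, snr):
    """MAP estimate under Gaussian noise and a Gaussian prior =
    Wiener deconvolution. Generic illustration only."""
    H = np.fft.fft(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft(G * np.fft.fft(y)))

# Toy example: sparse reflectivity circularly convolved with a pulse.
rng = np.random.default_rng(1)
n = 256
x = np.zeros(n); x[[40, 120, 200]] = [1.0, -0.6, 0.8]
t = np.arange(n)
psf = np.exp(-0.5 * (np.minimum(t, n - t) / 1.5) ** 2)  # pulse centered at 0
psf /= psf.sum()
y = np.real(np.fft.ifft(np.fft.fft(psf) * np.fft.fft(x)))
y += 0.01 * rng.standard_normal(n)
x_hat = wiener_deconvolve(y, psf, snr=1e4)
```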

Relevance:

30.00%

Publisher:

Abstract:

We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within the Mediterranean-scale plate kinematics and to perform local-scale studies of the strain accumulation on active structures. We focus attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use dislocation models and a non-linear constrained optimization algorithm to invert for fault geometric parameters and slip-rates, and evaluate the associated uncertainties adopting a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of elastic strain contributions from other nearby active faults on the observed velocity gradient we use a block modeling approach. Our models show that the inferred slip-rates on the two analyzed structures are strongly impacted by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within the present-day central Mediterranean kinematics we perform a statistical analysis testing the independent motion (w.r.t. the African and Eurasian plates) of the Adriatic, Calabrian and Sicilian blocks. Our preferred model confirms a microplate-like behaviour for all the investigated blocks. Within these kinematic boundary conditions we further investigate the Calabrian slab interface geometry using a combined approach of block modeling and the χ²_ν statistic. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To constrain the slab geometry more strongly, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations varying the fault locking depth. Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
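Bootstrap uncertainty estimation of the kind described here typically resamples the station data with replacement and re-runs the inversion; a generic sketch (with a stand-in one-parameter "inversion", not the actual dislocation model) is:

```python
import numpy as np

rng = np.random.default_rng(42)

def invert_slip_rate(velocities):
    """Stand-in for the real constrained dislocation inversion:
    here simply the mean across-fault velocity difference."""
    return velocities.mean()

# Hypothetical across-fault GPS velocity differences [mm/yr].
obs = rng.normal(3.0, 0.8, size=25)

# Bootstrap: resample stations with replacement, re-invert each time.
boot = np.array([invert_slip_rate(rng.choice(obs, size=obs.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slip-rate ~ {boot.mean():.2f} mm/yr, 95% CI [{lo:.2f}, {hi:.2f}]")
```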

Relevance:

30.00%

Publisher:

Abstract:

Non-Equilibrium Statistical Mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, or else with systems which are in a steady non-equilibrium state, or with more general situations. They are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships, we discuss a duality between external and internal observables that reverses the role of the system and of the environment, and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects regarding linear response determinants, which are related to spanning tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, we give a physical interpretation for observables in terms of locally detailed balanced rates, we prove many variants of the fluctuation theorem, and show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of nonequilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time-reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of nonequilibrium phenomena.
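Schnakenberg's steady-state entropy production for a continuous-time Markov chain with rates w_ij is σ = ½ Σ_{i≠j} (w_ij p_j − w_ji p_i) ln(w_ij p_j / w_ji p_i); a minimal numerical sketch on a toy three-state chain (rates assumed for illustration) is:

```python
import numpy as np

# Toy 3-state continuous-time Markov chain; W[i, j] is the rate j -> i
# (assumed values for illustration).
W = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
L = W - np.diag(W.sum(axis=0))       # generator: dp/dt = L @ p

# Stationary distribution: null eigenvector of the generator, normalized.
w, v = np.linalg.eig(L)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

# Schnakenberg's entropy production (units of k_B = 1).
sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j:
            J = W[i, j] * p[j] - W[j, i] * p[i]
            sigma += 0.5 * J * np.log((W[i, j] * p[j]) / (W[j, i] * p[i]))
print(f"entropy production rate ~ {sigma:.4f}")
```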

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. By using a database containing GPS measures of individual paths (position, velocity and covered space at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some statistical empirical laws pointing out "universal" characteristics of human mobility. Developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate which key quantities and cognitive features rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel-times. We propose an assimilation model to solve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
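A Benford's-law check of the kind mentioned here compares empirical first-digit frequencies with log10(1 + 1/d); a small sketch on toy data (the thesis applies this to inter-trip times, not to the synthetic samples below):

```python
import numpy as np

def benford_check(x):
    """Empirical first significant digit frequencies vs. Benford's law."""
    x = np.asarray(x, dtype=float)
    x = x[x > 0]
    first = np.floor(x / 10 ** np.floor(np.log10(x))).astype(int)
    emp = np.array([(first == d).mean() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))
    return emp, benford

# Toy data: log-uniform samples follow Benford's law closely.
rng = np.random.default_rng(3)
samples = 10 ** rng.uniform(0, 4, size=100_000)
emp, ben = benford_check(samples)
for d, (e, b) in enumerate(zip(emp, ben), start=1):
    print(f"digit {d}: empirical {e:.3f}, Benford {b:.3f}")
```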

Relevance:

30.00%

Publisher:

Abstract:

This thesis provides a thorough theoretical background in network theory and shows novel applications to real problems and data. In the first chapter a general introduction to network ensembles is given, and the relations with “standard” equilibrium statistical mechanics are described. Moreover, an entropy measure is used to analyze the statistical properties of integrated PPI-signalling-mRNA expression networks in different cases. In the second chapter multilayer networks are introduced to evaluate and quantify the correlations between real interdependent networks. Multiplex networks describing citation-collaboration interactions and patterns in colorectal cancer are presented. The last chapter is entirely dedicated to control theory and its relation with network theory. We characterise how the structural controllability of a network is affected by the fraction of low in-degree and low out-degree nodes. Finally, we present a novel approach to the controllability of multiplex networks.
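Structural controllability of a single-layer digraph is usually computed via maximum matching (the Liu-Slotine-Barabási result: N_D = max(N − |M*|, 1)); a sketch of that standard construction with networkx, not of the thesis's multiplex extension:

```python
import networkx as nx

def n_driver_nodes(digraph):
    """Minimum number of driver nodes for structural controllability:
    N_D = max(N - |maximum matching|, 1), with the matching taken on the
    bipartite out/in representation of the directed graph."""
    B = nx.Graph()
    out_nodes = [("out", u) for u in digraph.nodes]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from((("in", u) for u in digraph.nodes), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in digraph.edges)
    matching = nx.bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched = len(matching) // 2      # dict stores each match twice
    return max(digraph.number_of_nodes() - matched, 1)

G = nx.gnp_random_graph(50, 0.05, seed=4, directed=True)
print(f"driver nodes needed: {n_driver_nodes(G)} of {G.number_of_nodes()}")
```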

Relevance:

30.00%

Publisher:

Abstract:

Despite the scientific achievements of the last decades in the astrophysical and cosmological fields, the majority of the Universe's energy content is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with large target mass and low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate the possibility of such a detector reaching its goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, electromagnetic and nuclear. From the analysis of the simulations, the background level has been found to be fully acceptable for the experiment's purposes: about 1 background event in a 2 tonne-year exposure. Indeed, using the Maximum Gap method, the XENON1T sensitivity has been evaluated and the minimum for the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two. Such a result is completely acceptable considering the intrinsic differences between the two statistical methods. Thus, in the PhD thesis it has been proven that the XENON1T detector will be able to reach the designed sensitivity, thus lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
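The Maximum Gap method mentioned here has a closed-form statistic (Yellin 2002): C0(x, μ) is the probability that the largest event-free gap is smaller than x when μ events are expected in total. A minimal sketch, with the familiar zero-event limit as a sanity check:

```python
import math
from scipy.optimize import brentq

def c0(x, mu):
    """Yellin's C0(x, mu): probability that the maximum gap is < x,
    with x and mu both in units of expected events. The algebraically
    equivalent form below avoids the 1/(mu - k*x) singularity."""
    total = 0.0
    for k in range(int(mu // x) + 1):
        a = k * x - mu
        term = 1.0 if k == 0 else a**k - k * a**(k - 1)
        total += math.exp(-k * x) / math.factorial(k) * term
    return total

# Zero events observed: the whole exposure is one gap, so x = mu, and
# C0 reduces to 1 - exp(-mu). Solving C0(mu, mu) = 0.90 recovers the
# classic 90% CL upper limit of ~2.30 expected events.
mu90 = brentq(lambda mu: c0(mu, mu) - 0.90, 0.1, 10.0)
print(f"mu_90 ~ {mu90:.2f} expected events")
```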

Relevance:

30.00%

Publisher:

Abstract:

One of the fundamental interactions in the Standard Model of particle physics is the strong force, which can be formulated as a non-abelian gauge theory called Quantum Chromodynamics (QCD). In the low-energy regime, where the QCD coupling becomes strong and quarks and gluons are confined to hadrons, a perturbative expansion in the coupling constant is not possible. However, the introduction of a four-dimensional Euclidean space-time lattice allows for an ab initio treatment of QCD and provides a powerful tool to study the low-energy dynamics of hadrons. Some hadronic matrix elements of interest receive contributions from diagrams including quark-disconnected loops, i.e. disconnected quark lines from one lattice point back to the same point. The calculation of such quark loops is computationally very demanding, because it requires knowledge of the all-to-all propagator. In this thesis we use stochastic sources and a hopping parameter expansion to estimate such propagators. We apply this technique to study two problems which rely crucially on the calculation of quark-disconnected diagrams, namely the scalar form factor of the pion and the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. The scalar form factor of the pion describes the coupling of a charged pion to a scalar particle. We calculate the connected and the disconnected contribution to the scalar form factor for three different momentum transfers. The scalar radius of the pion is extracted from the momentum dependence of the form factor. The use of several different pion masses and lattice spacings allows for an extrapolation to the physical point. The chiral extrapolation is done using chiral perturbation theory (χPT). We find that our pion mass dependence of the scalar radius is consistent with χPT at next-to-leading order. Additionally, we are able to extract the low energy constant ℓ_4 from the extrapolation, and our result is in agreement with results from other lattice determinations. Furthermore, our result for the scalar pion radius at the physical point is consistent with a value that was extracted from ππ-scattering data. The hadronic vacuum polarization (HVP) is the leading-order hadronic contribution to the anomalous magnetic moment a_μ of the muon. The HVP can be estimated from the correlation of two vector currents in the time-momentum representation. We explicitly calculate the corresponding disconnected contribution to the vector correlator. We find that the disconnected contribution is consistent with zero within its statistical errors. This result can be converted into an upper limit for the maximum contribution of the disconnected diagram to a_μ by using the expected time-dependence of the correlator and comparing it to the corresponding connected contribution. We find the disconnected contribution to be smaller than ≈5% of the connected one. This value can be used as an estimate for the systematic error that arises from neglecting the disconnected contribution.
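The core of the stochastic-source technique is the unbiased noise estimate Tr(M⁻¹) ≈ (1/N) Σ_s η_s† M⁻¹ η_s with E[η η†] = 1; a toy dense-matrix sketch (the real Dirac operator is huge and sparse, and the hopping parameter expansion further reduces the variance) is:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for a lattice Dirac operator: well-conditioned and dense.
n = 400
M = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)

# Z2 noise sources: entries +/-1, so E[eta eta^T] = identity and
# eta^T M^{-1} eta is an unbiased estimator of Tr(M^{-1}).
n_sources = 200
est = 0.0
for _ in range(n_sources):
    eta = rng.choice([-1.0, 1.0], size=n)
    est += eta @ np.linalg.solve(M, eta)   # one propagator solve per source
est /= n_sources

print(f"stochastic: {est:.2f}   exact: {np.trace(np.linalg.inv(M)):.2f}")
```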

Relevance:

30.00%

Publisher:

Abstract:

Statistical shape models (SSMs) have been used widely as a basis for segmenting and interpreting complex anatomical structures. The robustness of these models is sensitive to the registration procedure, i.e., the establishment of a dense correspondence across a training data set. In this work, two SSMs based on the same training data set of scoliotic vertebrae but on different registration procedures were compared. The first model was constructed from the original binary masks without applying any image pre- or post-processing, and the second was obtained by means of a feature-preserving smoothing method applied to the original training data set, followed by a standard rasterization algorithm. The accuracy of the correspondences was assessed quantitatively by means of the maximum of the mean minimum distance (MMMD) and the Hausdorff distance (HD). The anatomical validity of the models was quantified by means of three different criteria: compactness, specificity, and model generalization ability. The objective of this study was to compare quasi-identical models based on standard metrics. Preliminary results suggest that the MMMD distance and eigenvalues are not sensitive metrics for evaluating the performance and robustness of SSMs.
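The Hausdorff distance used above is readily computed from its two directed halves; a generic sketch with SciPy (toy point clouds, not the vertebra meshes; MMMD would additionally average minimum distances before taking the maximum across shapes):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (n x d arrays)."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 3))             # toy surface samples
B = A + 0.05 * rng.standard_normal((100, 3))  # slightly perturbed copy
print(f"HD = {hausdorff(A, B):.4f}")
```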

Relevance:

30.00%

Publisher:

Abstract:

Telomere length measurement has been proposed as a promising tool to estimate the age of individuals in natural populations. We used real-time quantitative PCR (qPCR) to measure relative telomere length in four tissues (brain, kidney, liver and muscle) of European hake (Merluccius merluccius) in different groups based upon body length and otolith age estimates. We observed a high level of inter-individual differences in the measurements of relative telomere length in hakes of similar age and body length groups. The results of the qPCR analysis showed great variability in all measures and a lack of repeatability and reproducibility, with statistically significant differences in the results of the different assays. The paper discusses the technical reasons for the variability in qPCR obtained in this work and by other authors.

Relevance:

30.00%

Publisher:

Abstract:

Smoothing splines are a popular approach for non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that provides motivation for the creation of criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
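For reference, the sketch below shows one of the baseline criteria (ordinary cross-validation) selecting the smoothness parameter of a spline fit to a circadian-like signal; it uses SciPy's generic (non-periodic) smoothing spline, so it illustrates the selection problem rather than the paper's REACT-based criteria:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 120)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)  # one "day" of noisy data

def cv_score(s, folds=5):
    """K-fold cross-validation error of a smoothing spline with parameter s."""
    idx = np.arange(t.size)
    err = 0.0
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        spl = UnivariateSpline(t[train], y[train], s=s * train.size / t.size)
        err += np.mean((y[test] - spl(t[test])) ** 2)
    return err / folds

grid = np.geomspace(0.1, 50.0, 25)
best = grid[np.argmin([cv_score(s) for s in grid])]
print(f"CV-selected smoothness parameter: s ~ {best:.2f}")
```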