967 results for Modèle non-standard


Relevance:

80.00%

Publisher:

Abstract:

We present an assessment of the practical value of existing traditional and non-standard measures for discriminating healthy people from people with Parkinson's disease (PD) by detecting dysphonia. We introduce a new measure of dysphonia, Pitch Period Entropy (PPE), which is robust to many uncontrollable confounding effects including noisy acoustic environments and normal, healthy variations in voice frequency. We collected sustained phonations from 31 people, 23 with PD. We then selected 10 highly uncorrelated measures, and an exhaustive search of all possible combinations of these measures finds four that in combination lead to overall correct classification performance of 91.4%, using a kernel support vector machine. In conclusion, we find that non-standard methods in combination with traditional harmonics-to-noise ratios are best able to separate healthy from PD subjects. The selected non-standard methods are robust to many uncontrollable variations in acoustic environment and individual subjects, and are thus well-suited to telemonitoring applications.
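As a rough illustration of the selection procedure described above (an exhaustive search over subsets of candidate measures, scored with a kernel support vector machine), the sketch below uses placeholder data and scikit-learn; the actual dysphonia measures, kernel settings and validation protocol of the study are not reproduced here.

```python
# Minimal sketch: exhaustive feature-subset search scored by a kernel SVM.
# The feature matrix and labels are placeholders (e.g. 10 dysphonia measures
# per sustained phonation), not the study's data.
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 10))      # hypothetical: 10 measures per phonation
y = rng.integers(0, 2, size=195)    # hypothetical PD / healthy labels

best_score, best_subset = -np.inf, None
for r in range(1, 11):
    for subset in combinations(range(10), r):
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        score = cross_val_score(clf, X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print("best subset of measures:", best_subset, "CV accuracy:", round(best_score, 3))
```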

Relevance:

80.00%

Publisher:

Abstract:

A generalized systematic description of the Two-Wave Mixing (TWM) process in sillenite crystals allowing for arbitrary orientation of the grating vector is presented. An analytical expression for the TWM gain is obtained for the special case of plane waves in a thin crystal (|g|d ≪ 1) with large optical activity (|g|/ρ ≪ 1, where g is the coupling constant, ρ the rotatory power, and d the crystal thickness). Using a two-dimensional formulation, the scope of the nonlinear equations describing TWM can be extended to finite beams in arbitrary geometries and to any crystal parameters. Two promising applications of this formulation are proposed. The polarization dependence of the TWM gain is used for the flattening of Gaussian beam profiles without expanding them. The dependence of the TWM gain on the interaction length is used for the determination of the crystal orientation. Experiments carried out on Bi12GeO20 crystals of a non-standard cut are in good agreement with the results of modelling.

Relevance:

80.00%

Publisher:

Abstract:

Nonlinear CW pump broadening over non-standard transmission fiber is used for the first time to achieve superior gain variation performance in a single-pump broadband Raman amplifier. A threefold increase in the bandwidth for 0.1 dB gain variation is reported.

Relevance:

80.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 26E35, 14H05, 14H20.

Relevance:

80.00%

Publisher:

Abstract:

2010 Mathematics Subject Classification: Primary 18G35; Secondary 55U15.

Relevance:

80.00%

Publisher:

Abstract:

We investigate by means of Monte Carlo simulation and finite-size scaling analysis the critical properties of the three-dimensional O(5) non-linear σ model and of the antiferromagnetic RP² model, both of them regularized on a lattice. High-accuracy estimates are obtained for the critical exponents, universal dimensionless quantities and critical couplings. It is concluded that both models belong to the same universality class, provided that rather non-standard identifications are made for the momentum-space propagator of the RP² model. We have also investigated the phase diagram of the RP² model extended by a second-neighbor interaction. A rich phase diagram is found, where most of the phase transitions are of first order.
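As an illustration of the simulation technique named above, and only that (the couplings, lattice sizes, update scheme and observables of the actual study are not reproduced), a minimal Metropolis sweep for a three-dimensional O(5) non-linear σ model on a periodic cubic lattice might look as follows, with `beta` and `L` as placeholder values.

```python
# Minimal sketch: single-site Metropolis updates for a 3D O(5) sigma model,
# action S = -beta * sum_<ij> S_i . S_j with unit 5-vector spins.
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(shape):
    """Draw unit 5-vectors uniformly on the sphere S^4."""
    v = rng.normal(size=shape + (5,))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def metropolis_sweep(spins, beta):
    """One sweep over the lattice; returns the acceptance rate."""
    L = spins.shape[0]
    accepted = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                # Sum of the six nearest-neighbour spins (periodic boundaries).
                nb = (spins[(x+1) % L, y, z] + spins[(x-1) % L, y, z] +
                      spins[x, (y+1) % L, z] + spins[x, (y-1) % L, z] +
                      spins[x, y, (z+1) % L] + spins[x, y, (z-1) % L])
                old = spins[x, y, z]
                new = random_unit_vectors(())            # fresh random direction
                delta_s = -beta * np.dot(new - old, nb)  # change in the action
                if delta_s <= 0 or rng.random() < np.exp(-delta_s):
                    spins[x, y, z] = new
                    accepted += 1
    return accepted / L**3

L, beta = 8, 1.18                       # illustrative values only
spins = random_unit_vectors((L, L, L))
for sweep in range(100):
    metropolis_sweep(spins, beta)
print("|M| per spin:", np.linalg.norm(spins.mean(axis=(0, 1, 2))))
```

A finite-size scaling analysis would repeat such runs for several lattice sizes and locate the critical coupling from crossings of dimensionless quantities such as the Binder cumulant.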

Relevance:

80.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis, on the grounds that "n = all", is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
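To make the tensor-factorization viewpoint concrete, the following toy sketch builds the joint probability tensor of three categorical variables from latent-class (PARAFAC) parameters; the dimensions and Dirichlet-generated parameter values are illustrative and are not taken from Chapter 2.

```python
# Sketch: the pmf of p categorical variables as a mixture of product-multinomial
# kernels, i.e. a non-negative rank-k (PARAFAC) tensor factorization.
import numpy as np

rng = np.random.default_rng(0)
p, d, k = 3, 4, 2            # 3 variables, 4 categories each, 2 latent classes

# Mixture weights lambda_h and class-specific marginals psi[j, h, :] (each sums to 1).
lam = rng.dirichlet(np.ones(k))
psi = rng.dirichlet(np.ones(d), size=(p, k))     # shape (p, k, d)

# Full d x d x d probability tensor: P(y1, y2, y3) = sum_h lam_h prod_j psi[j, h, y_j].
P = np.einsum("h,hi,hj,hl->ijl", lam, psi[0], psi[1], psi[2])
assert np.isclose(P.sum(), 1.0)
print("nonnegative rank of this construction is at most", k)
```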

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
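As a loose illustration of the general idea of Gaussian posterior approximation, here is a generic Laplace approximation to a small Poisson log-linear model. It is not the optimal Gaussian approximation under Diaconis--Ylvisaker priors derived in Chapter 4, and the table, design matrix and prior are made up.

```python
# Sketch: Laplace-style Gaussian approximation to a log-linear (Poisson) posterior.
import numpy as np
from scipy.optimize import minimize

# A 2x2 contingency table as Poisson counts with a design matrix for
# (intercept, row effect, column effect), i.e. the independence log-linear model.
counts = np.array([30.0, 10.0, 12.0, 3.0])
Xd = np.array([[1, 1, 1],
               [1, 1, 0],
               [1, 0, 1],
               [1, 0, 0]], dtype=float)
prior_prec = 1e-2                      # weak Gaussian prior on the coefficients

def neg_log_post(theta):
    eta = Xd @ theta
    return np.sum(np.exp(eta) - counts * eta) + 0.5 * prior_prec * theta @ theta

res = minimize(neg_log_post, np.zeros(3), method="BFGS")
mode = res.x
# Hessian of the negative log posterior at the mode gives the Gaussian precision.
W = np.diag(np.exp(Xd @ mode))
precision = Xd.T @ W @ Xd + prior_prec * np.eye(3)
cov = np.linalg.inv(precision)
print("Gaussian approximation: mean =", mode.round(3))
print("posterior sd =", np.sqrt(np.diag(cov)).round(3))
```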

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
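A toy example of the kind of approximate transition kernel such a framework covers (not an algorithm from Chapter 6): a random-walk Metropolis step whose log-likelihood is replaced by a scaled random-subsample estimate, for a Gaussian-mean model with all settings chosen purely for illustration.

```python
# Sketch: Metropolis step using a subsampled log-likelihood estimate, an
# approximation to the exact transition kernel.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=100_000)     # synthetic data, true mean = 2
N = len(data)

def approx_loglik(theta, subsample_size):
    """Scaled subsample estimate of the full-data Gaussian log-likelihood."""
    idx = rng.integers(0, N, size=subsample_size)
    return (N / subsample_size) * np.sum(-0.5 * (data[idx] - theta) ** 2)

def approx_mh(n_iter, subsample_size, step=0.02):
    theta, chain = 0.0, np.empty(n_iter)
    ll = approx_loglik(theta, subsample_size)     # stored estimate for current state
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = approx_loglik(prop, subsample_size)   # fresh estimate for the proposal
        # Flat prior; accept/reject with the noisy estimates, so this chain only
        # approximates the exact Metropolis kernel.
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[i] = theta
    return chain

chain = approx_mh(5_000, subsample_size=1_000)
print("rough posterior mean estimate:", chain[1_000:].mean())
```

The noisier the subsample estimate, the further the stationary distribution of this approximate chain can drift from the exact posterior, which is precisely the error-versus-computation trade-off such a framework has to quantify.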

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
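For concreteness, a minimal version of the truncated normal (Albert-Chib) data augmentation Gibbs sampler for probit regression is sketched below on simulated rare-event data; the prior, sample size and success rate are illustrative, and the slow mixing shows up as a lag-1 autocorrelation close to one for the intercept.

```python
# Sketch: truncated-normal data augmentation Gibbs sampler for probit regression
# on rare-event data (intercept-only model, roughly 0.2% successes).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n = 10_000
X = np.ones((n, 1))
y = (rng.random(n) < 0.002).astype(float)

def probit_da_gibbs(X, y, n_iter=1_000, prior_var=100.0):
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X + np.eye(p) / prior_var)   # cov of beta | z
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i: truncated normal, positive if y_i = 1, negative otherwise.
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = truncnorm.rvs(lower, upper, loc=mu, scale=1.0, random_state=rng)
        # beta | z: Gaussian with mean V X'z and covariance V (zero prior mean).
        beta = V @ (X.T @ z) + L @ rng.normal(size=p)
        draws[it] = beta
    return draws

draws = probit_da_gibbs(X, y)
b = draws[200:, 0]
print("lag-1 autocorrelation of the intercept:", np.corrcoef(b[:-1], b[1:])[0, 1])
```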

Relevance:

80.00%

Publisher:

Abstract:

Research on the relationship between reproductive work and women's life trajectories including the experience of labour migration has mainly focused on the case of relatively young mothers who leave behind, or later re-join, their children. While it is true that most women migrate at a younger age, there are a significant number of cases of men and women who move abroad for labour purposes at a more advanced stage, undertaking a late-career migration. This is still an under-estimated and under-researched sub-field that uncovers a varied range of issues, including the global organization of reproductive work and the employment of migrant women as domestic workers late in their lives. By pooling the findings of two qualitative studies, this article focuses on Peruvian and Ukrainian women who seek employment in Spain and Italy when they are well into their forties, or older. A commonality the two groups of women share is that, independently of their level of education and professional experience, more often than not they end up as domestic and care workers. The article initially discusses the reasons for late-career female migration, taking into consideration the structural and personal determinants that have affected Peruvian and Ukrainian women's careers in their countries of origin and settlement. After this, the focus is set on the characteristics of domestic employment in later life, on the impact on their current lives, including the transnational family organization, and on future labour and retirement prospects. Apart from an evaluation of objective working and living conditions, we discuss women's personal impressions of being domestic workers in the context of their occupational experiences and family commitments. In this regard, women report varying levels of personal and professional satisfaction, as well as different patterns of continuity-discontinuity in their work and family lives, and of optimism towards the future. Divergences could be, to some extent, explained by the effect of migrants' transnational social practices and of state policies.

Relevance:

80.00%

Publisher:

Abstract:

Context: Model atmosphere analyses have been previously undertaken for both Galactic and extragalactic B-type supergiants. By contrast, little attention has been given to a comparison of the properties of single supergiants and those that are members of multiple systems. 

Aims: Atmospheric parameters and nitrogen abundances have been estimated for all the B-type supergiants identified in the VLT-FLAMES Tarantula survey. These include both single targets and binary candidates. The results have been analysed to investigate the role of binarity in the evolutionary history of supergiants. 

Methods: TLUSTY non-local thermodynamic equilibrium (non-LTE) model atmosphere calculations have been used to determine atmospheric parameters and nitrogen abundances for 34 single and 18 binary supergiants. Effective temperatures were deduced using the silicon balance technique, complemented by the helium ionisation in the hotter spectra. Surface gravities were estimated using Balmer line profiles, and microturbulent velocities were deduced using the silicon spectrum. Nitrogen abundances or upper limits were estimated from the N II spectrum. The effects of a flux contribution from an unseen secondary were considered for the binary sample. 

Results: We present the first systematic study of the incidence of binarity for a sample of B-type supergiants across the theoretical terminal-age main sequence (TAMS). To account for the distribution of effective temperatures of the B-type supergiants it may be necessary to extend the TAMS to lower temperatures. This is also consistent with the derived distribution of mass discrepancies, projected rotational velocities and nitrogen abundances, provided that stars cooler than this temperature are post-red-supergiant objects. For all the supergiants in the Tarantula and in a previous FLAMES survey, the majority have small projected rotational velocities. The distribution peaks at about 50 km s⁻¹, with 65% in the range 30 km s⁻¹ ≤ ve sin i ≤ 60 km s⁻¹. About ten per cent have larger ve sin i (≥100 km s⁻¹), but surprisingly these show little or no nitrogen enhancement. All the cooler supergiants have low projected rotational velocities of ≤70 km s⁻¹ and high nitrogen abundance estimates, implying that either bi-stability braking or evolution on a blue loop may be important. Additionally, there is a lack of cooler binaries, possibly reflecting the small sample sizes. Single-star evolutionary models, which include rotation, can account for all of the nitrogen enhancement in both the single and binary samples. The detailed distribution of nitrogen abundances in the single and binary samples may be different, possibly reflecting differences in their evolutionary history. 

Conclusions: The first comparative study of single and binary B-type supergiants has revealed that the main sequence may be significantly wider than previously assumed, extending to Teff = 20 000 K. Some marginal differences in single and binary atmospheric parameters and abundances have been identified, possibly implying non-standard evolution for some of the sample. This sample as a whole has implications for several aspects of our understanding of the evolutionary status of blue supergiants.

Relevance:

80.00%

Publisher:

Abstract:

The effect of climate on vegetation growth has long been an established fact. Global climate change has led to increased research effort on the impact of these changes in natural environments, both in terms of species distribution and abundance and through the study of the yields of commercial species. The present study aims to determine, using dendrochronological records, the effects of climatic variables on the growth of black spruce and balsam fir across the boreal forest of Québec. The goal is to identify the main climatic modifiers driving the growth of boreal stands as a function of their age and location. Focusing on a non-linear least-squares model incorporating the climatic modifiers and an age modifier, modelling basal area growth as a function of these criteria revealed differences between balsam fir and black spruce. The results show that both species respond mainly to the length of the growing season and to maximum summer temperatures. Black spruce also appears more sensitive to drought conditions. Models based on age and on location along a north-south gradient reveal some differences, notably a more pronounced response of young stands to climate, particularly to temperature, whereas old stands are sensitive to solar radiation. The study nevertheless shows that spruce is relatively independent of the latitudinal gradient, unlike fir. The results allow a discussion of changes in the productivity of these species linked to a lengthening growing season (a gain for both species) and to rising temperatures in conjunction with precipitation (a drought-related loss for spruce) in the context of climate change.
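To make the modelling approach concrete, here is a purely illustrative non-linear least-squares fit of a multiplicative growth model with an age modifier and two climate modifiers; the functional forms, parameter values and synthetic data are assumptions for the sketch and are not the model actually fitted in the study.

```python
# Hypothetical sketch of a multiplicative modifier growth model fitted by
# non-linear least squares with scipy.
import numpy as np
from scipy.optimize import curve_fit

def growth_model(X, g_max, b_age, b_season, b_temp):
    """Basal-area growth = potential growth x age modifier x climate modifiers."""
    age, season_length, summer_tmax = X
    age_mod = np.exp(-b_age * age)                          # growth declines with stand age
    season_mod = 1.0 - np.exp(-b_season * season_length)    # longer season helps
    temp_mod = np.exp(-b_temp * (summer_tmax - 18.0) ** 2)  # assumed optimum near 18 C
    return g_max * age_mod * season_mod * temp_mod

# Synthetic example data (stand age in years, season length in days, Tmax in C).
rng = np.random.default_rng(1)
age = rng.uniform(20, 150, 200)
season = rng.uniform(140, 190, 200)
tmax = rng.uniform(14, 24, 200)
growth = growth_model((age, season, tmax), 4.0, 0.01, 0.02, 0.05) + rng.normal(0, 0.05, 200)

params, cov = curve_fit(growth_model, (age, season, tmax), growth,
                        p0=[3.0, 0.01, 0.02, 0.05])
print("fitted parameters:", params)
```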

Relevance:

80.00%

Publisher:

Abstract:

The non-standard decoding of the CUG codon in Candida cylindracea raises a number of questions about the evolutionary process of this organism and of other species of the Candida clade for which the codon is ambiguous. In order to find some answers, we studied the transcriptome of C. cylindracea, comparing its behavior with that of Saccharomyces cerevisiae (a standard decoder) and Candida albicans (an ambiguous decoder). The transcriptome characterization was performed using RNA-seq, an approach that has several advantages over microarrays and whose use is growing rapidly. TopHat and Cufflinks were the software used to build the protocol that allowed for gene quantification. About 95% of the reads were mapped on the genome. 3693 genes were analyzed, of which 1338 had a non-standard start codon (TTG/CTG), and the percentage of expressed genes was 99.4%. Most genes have intermediate levels of expression, some have little or no expression, and a minority is highly expressed. The distribution profile of CUG codons among the three species is different, but it can be significantly associated with gene expression levels: genes with fewer CUGs are the most highly expressed. However, CUG content is not related to the conservation level: more and less conserved genes have, on average, an equal number of CUGs. The most conserved genes are the most expressed. The lipase genes corroborate the results obtained for most genes of C. cylindracea, since they are very rich in CUGs and not at all conserved. The reduced number of CUG codons observed in highly expressed genes may be due to an insufficient number of tRNA genes to cope with more CUGs without compromising translational efficiency. From the enrichment analysis, it was confirmed that the most conserved genes are associated with basic functions such as translation, pathogenesis and metabolism. Within this set, genes with more or fewer CUGs seem to have different functions. The key issues concerning this evolutionary phenomenon remain unclear. However, the results are consistent with previous observations and yield a variety of conclusions that should be taken into consideration in future analyses, since this is the first time that such a study has been conducted.
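One step of such an analysis, counting in-frame CUG (CTG) codons per coding sequence and comparing CUG content across expression levels, could look like the sketch below; the input file names are hypothetical placeholders and this is not the thesis pipeline itself, which used TopHat and Cufflinks for mapping and quantification.

```python
# Sketch: per-gene CUG (CTG) codon counts versus expression estimates.
from Bio import SeqIO      # assumes Biopython is available
import csv

def cug_count(cds_seq):
    """Count CTG codons read in frame from the first position of the CDS."""
    s = str(cds_seq).upper()
    return sum(1 for i in range(0, len(s) - 2, 3) if s[i:i + 3] == "CTG")

# Hypothetical inputs: a FASTA of coding sequences and a Cufflinks-style
# genes.fpkm_tracking table with per-gene FPKM values.
cug_per_gene = {rec.id: cug_count(rec.seq)
                for rec in SeqIO.parse("c_cylindracea_cds.fasta", "fasta")}

with open("genes.fpkm_tracking") as fh:
    fpkm = {row["gene_id"]: float(row["FPKM"])
            for row in csv.DictReader(fh, delimiter="\t")}

# Compare mean CUG content of the least and most expressed deciles.
shared = sorted((g for g in cug_per_gene if g in fpkm), key=lambda g: fpkm[g])
k = max(1, len(shared) // 10)
mean_cug = lambda genes: sum(cug_per_gene[g] for g in genes) / len(genes)
print("mean CUGs, lowest-expression decile:", mean_cug(shared[:k]))
print("mean CUGs, highest-expression decile:", mean_cug(shared[-k:]))
```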

Relevance:

80.00%

Publisher:

Abstract:

We present a scotogenic model, i.e. a one-loop neutrino mass model with dark right-handed neutrino gauge singlets and one inert dark scalar gauge doublet η, which has symmetries that lead to co-bimaximal mixing, i.e. to an atmospheric mixing angle θ23 = 45° and to a CP-violating phase δ = ±π/2, while the mixing angle θ13 remains arbitrary. The symmetries consist of softly broken lepton numbers L_α (α = e, μ, τ), a non-standard CP symmetry, and three Z_2 symmetries. We indicate two possibilities for extending the model to the quark sector. Since the model has, besides η, three scalar gauge doublets, we perform a thorough discussion of its scalar sector. We demonstrate that it can accommodate a Standard Model-like scalar with mass 125 GeV, with all the other charged and neutral scalars having much higher masses.
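For reference (this equivalence is standard and is not quoted from the abstract), co-bimaximal mixing can equally be stated as an equality between the moduli of the μ-row and τ-row elements of the lepton mixing matrix U:

\[
|U_{\mu i}| = |U_{\tau i}| \quad (i = 1, 2, 3)
\;\Longleftrightarrow\;
\theta_{23} = 45^\circ, \;\; \delta = \pm\tfrac{\pi}{2}
\qquad (\text{for } \theta_{13} \neq 0),
\]

with θ13 itself left unconstrained, exactly as in the model above.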

Relevance:

80.00%

Publisher:

Abstract:

Planar <110> GaAs nanowires and quantum dots grown by atmospheric MOCVD have been introduced to non-standard growth conditions such as incorporating Zn and growing them on free-standing suspended films and on 10° off-cut substrates. Zn doped nanowires exhibited periodic notching along the axis of the wire that is dependent on Zn/Ga gas phase molar ratios. Planar nanowires grown on suspended thin films give insight into the mobility of the seed particle and change in growth direction. Nanowires that were grown on the off-cut sample exhibit anti-parallel growth direction changes. Quantum dots are grown on suspended thin films and show preferential growth at certain temperatures. Envisioned nanowire applications include twin-plane superlattices, axial pn-junctions, nanowire lasers, and the modulation of nanowire growth direction against an impeding barrier and varying substrate conditions.

Relevance:

80.00%

Publisher:

Abstract:

A fundamental step in understanding the effects of irradiation on metallic uranium and uranium dioxide ceramic fuels, or any material, must start with the nature of radiation damage at the atomic level. The resulting atomic displacement damage produces a multitude of defects that influence fuel performance. Nuclear reactions are coupled, in that changing one variable will alter others through feedback. In the field of fuel performance modeling, these difficulties are addressed through the use of empirical models rather than models based on first principles. Empirical models can be used as a predictive code through the careful manipulation of input variables for the limited circumstances that are closely tied to the data used to create the model. While empirical models are efficient and give acceptable results, these results are only applicable within the range of the existing data. This narrow window prevents modeling changes in operating conditions that would invalidate the model, as the new operating conditions would not be within the calibration data set. This work is part of a larger effort to correct for this modeling deficiency. Uranium dioxide and metallic uranium fuels are analyzed with a kinetic Monte Carlo (kMC) code as part of an overall effort to generate a stochastic and predictive fuel code. The kMC investigations include sensitivity analyses of point defect concentrations, thermal gradients (implemented through a temperature-variation mesh grid), and migration energy values. In this work, fission damage is primarily represented through defects on the oxygen anion sublattice. Results were also compared between the various models. Past studies of kMC point defect migration have not adequately addressed non-standard migration events such as clustering and dissociation of vacancies. As such, the General Utility Lattice Program (GULP) code was utilized to generate new migration energies so that additional non-standard migration events could be included in the kMC code in the future for more comprehensive studies. Defect energies were calculated to generate barrier heights for single vacancy migration, clustering and dissociation of two vacancies, and vacancy migration under the influence of both an additional oxygen vacancy and an additional uranium vacancy.
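For orientation, the core of a rejection-free (residence-time) kMC scheme for a single oxygen vacancy hopping on a cubic lattice is sketched below; the migration barrier, attempt frequency, temperature and lattice size are placeholder values, not the GULP-derived energies or the models used in this work.

```python
# Sketch: residence-time kinetic Monte Carlo for a single vacancy with
# Arrhenius hop rates on a periodic cubic lattice.
import math
import random

K_B = 8.617333e-5          # Boltzmann constant, eV/K
NU0 = 1.0e13               # attempt frequency, 1/s (assumed)
E_MIG = 0.5                # migration barrier, eV (placeholder, not a GULP value)
T = 1200.0                 # temperature, K
L = 20                     # lattice edge length (sites)

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def kmc_trajectory(n_steps, temperature):
    """Return (positions, elapsed_time) for a single-vacancy random walk."""
    rng = random.Random(0)
    pos = (0, 0, 0)
    t = 0.0
    rate = NU0 * math.exp(-E_MIG / (K_B * temperature))   # Arrhenius hop rate
    positions = [pos]
    for _ in range(n_steps):
        # All six hops share one rate here, so the event choice is uniform;
        # with clustering, dissociation or additional vacancies, each event
        # would carry its own barrier-dependent rate and be chosen with
        # probability proportional to it.
        total_rate = 6 * rate
        move = rng.choice(MOVES)
        pos = tuple((p + m) % L for p, m in zip(pos, move))
        # Residence-time algorithm: advance the clock by an exponential waiting time.
        t += -math.log(1.0 - rng.random()) / total_rate
        positions.append(pos)
    return positions, t

traj, elapsed = kmc_trajectory(10_000, T)
print(f"{len(traj) - 1} hops simulated in {elapsed:.3e} s of physical time")
```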

Relevance:

80.00%

Publisher:

Abstract:

Part 20: Health and Care Networks