987 results for APPROXIMATIONS


Relevance: 10.00%

Abstract:

The research systematized in this analysis sought to apprehend the articulation of the socio-educational service network serving adolescents under socio-educational confinement measures in the Seridó region of the state of Rio Grande do Norte, particularly in Caicó, the central city of this region. The study was motivated by an interest in unraveling the contradictory reality imposed by the neoliberal State, which is sparing in the guarantee of rights, especially for these adolescents, who are seen as offenders and are stigmatized by capitalist society. The research was carried out between July and September 2013, from a critical perspective, using documentary analysis, observation techniques and interviews with professionals of the Educational Center (CEDUC), the Unified Health System (SUS), the social assistance policies and the State Department of Education, which should constitute the service network that revolves around the National System of Socio-educational Services (SINASE). The Statute of the Child and Adolescent (ECA) and SINASE establish that socio-educational measures cannot be applied in isolation from public policies, making it indispensable to articulate the system with the social policies of social assistance, education and health. However, it was observed that the neoliberal logic of the capitalist State has produced fragmented, disconnected, focalized and superficial social policies, which fail to give effect to rights acquired beyond the legal sphere. From this perspective, it is possible to affirm that the everyday life of poor Brazilian adolescents is marked by the action of the State, which aims to control those who disturb the order of capital and threaten production, the market, consumption and private property. In this way, actions are promoted that criminalize poverty and impose a legal response on this expression of the social question, to the detriment of social policies that meet the real needs of adolescents. In the face of this reality, it becomes necessary to place the struggle for rights on the agenda of the here and now, aiming at a broad public debate involving professionals, researchers and social movements in support of the realization of rights, in order to support reflection and strengthen ways of confronting this social problem. Through the approximations made in this study, it was learned that the struggle for rights is a struggle for another project of society, beyond what is established.

Relevance: 10.00%

Abstract:

Dark matter is a fundamental ingredient of modern cosmology. It is necessary in order to explain the process of structure formation in the Universe, the rotation curves of galaxies and the mass discrepancy in clusters of galaxies. However, despite many efforts, both theoretical and experimental, the nature of dark matter is still unknown, and the only convincing evidence for its existence is gravitational. This raises doubts about its existence and, in turn, opens the possibility that Einstein's gravity needs to be modified at some scale. In this work we study the possibility that Eddington-Born-Infeld (EBI) modified gravity provides an alternative explanation for the mass discrepancy in clusters of galaxies. For this purpose we derive the modified Einstein field equations and find their solutions for a spherical system of identical, collisionless point particles. Then, taking into account the collisionless relativistic Boltzmann equation and using some approximations and assumptions valid for weak gravitational fields, we derive the generalized virial theorem in the framework of EBI gravity. In order to compare the predictions of EBI gravity with astrophysical observations, we estimate the order of magnitude of the geometric mass, showing that it is compatible with present observations. Finally, considering a power law for the density of galaxies in the cluster, we derive expressions for the radial velocity dispersion of the galaxies, which can be used to test some features of EBI gravity.
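
As a point of reference, the sketch below (in LaTeX, and not taken from the thesis itself) recalls the standard virial relation that cluster mass-discrepancy analyses start from; the EBI "geometric mass" is indicated only schematically, since its explicit form is not stated in the abstract.

    % Standard virial theorem for a relaxed, self-gravitating system,
    % and the usual virial mass estimate from the radial velocity dispersion:
    \begin{align}
      2K + W &= 0, &
      M_V &\simeq \frac{3\,\sigma_r^2\, R_V}{G}.
    \end{align}
    % In a modified-gravity setting the generalized virial theorem picks up an
    % extra contribution, conventionally absorbed into a geometric mass term,
    \begin{equation}
      M_{\mathrm{tot}}(r) \;\simeq\; M(r) + M_{\mathrm{geom}}(r),
    \end{equation}
    % so the observed dispersion profile constrains $M_{\mathrm{geom}}$ once the
    % ordinary mass $M(r)$ is estimated independently.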

Relevance: 10.00%

Abstract:

Multi-objective problems may have many optimal solutions, which together form the Pareto optimal set. A class of heuristic algorithms for these problems, in this work called optimizers, produces approximations of this optimal set. The approximation set kept by the optimizer may be limited or unlimited. The benefit of using an unlimited archive is the guarantee that all nondominated solutions generated during the process will be saved. However, due to the large number of solutions that can be generated, maintaining the archive and frequently comparing new solutions with the stored ones may demand a high computational cost. The alternative is to use a limited archive. The problem that emerges from this situation is the need to discard nondominated solutions when the archive is full. Several techniques have been proposed to handle this problem, but investigations show that none of them can reliably prevent the deterioration of the archive. This work investigates a technique to be used together with ideas previously proposed in the literature to deal with limited archives. The technique consists in keeping discarded solutions in a secondary archive and periodically recycling these solutions, bringing them back into the optimization. Three recycling methods are presented. In order to verify whether these ideas are capable of improving the archive content during the optimization, they were implemented together with other techniques from the literature. A computational experiment with the NSGA-II, SPEA2, PAES, MOEA/D and NSGA-III algorithms, applied to many classes of problems, is presented. The potential and the difficulties of the proposed techniques are evaluated based on statistical tests.
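
A minimal sketch of the idea (illustrative Python, not the thesis implementation): a bounded nondominated archive that stores the solutions it is forced to discard in a secondary archive and periodically recycles them back into the search. The random discard rule is a placeholder assumption; the thesis studies three specific recycling methods.

    import random

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    class RecyclingArchive:
        def __init__(self, capacity):
            self.capacity = capacity
            self.primary = []    # bounded nondominated archive
            self.secondary = []  # discarded nondominated solutions, kept for recycling

        def add(self, sol):
            if any(dominates(p, sol) for p in self.primary):
                return
            # drop members dominated by the newcomer
            self.primary = [p for p in self.primary if not dominates(sol, p)]
            self.primary.append(sol)
            if len(self.primary) > self.capacity:
                # archive is full: discard one solution, but remember it
                victim = random.choice(self.primary)   # placeholder discard policy
                self.primary.remove(victim)
                self.secondary.append(victim)

        def recycle(self, k):
            """Bring up to k previously discarded solutions back into the search."""
            recycled, self.secondary = self.secondary[:k], self.secondary[k:]
            return recycled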

Relevance: 10.00%

Abstract:

Survival models deal with the modelling of time-to-event data. In certain situations, a share of the population can no longer be subject to the occurrence of the event. In this context, cure fraction models emerged. Among the models that incorporate a cured fraction, one of the best known is the promotion time model. In the present study we discuss hypothesis testing in the promotion time model with a Weibull distribution for the failure times of susceptible individuals. Hypothesis testing in this model may be performed using likelihood ratio, gradient, score or Wald statistics. The critical values are obtained from asymptotic approximations, which may result in size distortions in finite samples. This study proposes bootstrap corrections to the aforementioned tests, and a bootstrap Bartlett correction to the likelihood ratio statistic, in the Weibull promotion time model. Using Monte Carlo simulations, we compare the finite-sample performance of the proposed corrections with that of the usual tests. The numerical evidence favors the proposed corrected tests. At the end of the work an empirical application is presented.
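
For concreteness, a generic sketch of a parametric bootstrap correction applied to a likelihood ratio test (illustrative Python; fit_null, fit_full and simulate_null are hypothetical stand-ins for the model-specific fitting and simulation code of the Weibull promotion time model):

    import numpy as np

    def lr_stat(data, fit_null, fit_full):
        """Likelihood ratio statistic: 2 * (max loglik under full - under null)."""
        return 2.0 * (fit_full(data).loglik - fit_null(data).loglik)

    def bootstrap_lr_pvalue(data, fit_null, fit_full, simulate_null, B=999, seed=0):
        rng = np.random.default_rng(seed)
        obs = lr_stat(data, fit_null, fit_full)
        null_fit = fit_null(data)
        boot = np.empty(B)
        for b in range(B):
            # simulate a sample from the fitted null model and recompute the statistic
            data_b = simulate_null(null_fit, rng)
            boot[b] = lr_stat(data_b, fit_null, fit_full)
        # bootstrap p-value: proportion of simulated statistics at least as extreme
        return (1 + np.sum(boot >= obs)) / (B + 1)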

Relevance: 10.00%

Abstract:

The use of software based on numerical approximations in metal forming is motivated by the need to ensure process efficiency, obtaining high-quality products at the lowest cost and in the shortest time. This study uses the theory of similitude to develop a technique capable of simulating the stamping process of a metal sheet, obtaining results close to the real values with shorter processing times. The results are obtained through simulations performed in the finite element software STAMPACK®. This software uses explicit time integration, which is usually applied to solve nonlinear problems involving contact, such as metal forming processes. The technique was developed from a stamping model of a square box, simulated with four different scale factors, two larger and two smaller than the real scale. The technique was validated with a bending model of a welded plate, which had a long simulation time. Applying the technique reduced the simulation time by more than 50%. The results of applying the scale technique to plate forming were satisfactory, showing good quantitative results regarding the reduction of the total simulation time. Finally, it is noted that the reduction in simulation time is only possible with the combined use of two related scales, the geometric and the kinematic. The kinematic scale factors should be used with caution, because high speeds can cause dynamic problems and influence the results of the simulations.
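
A back-of-the-envelope sketch (illustrative Python with assumed numbers, not the procedure used in the work) of why the geometric and kinematic scales have to be chosen together in an explicit simulation: the number of explicit increments is roughly the simulated process duration divided by the stable time step, which grows with the smallest element size.

    def explicit_increments(punch_travel_m, punch_speed_m_s, min_element_m, wave_speed_m_s):
        """Rough increment count: process duration / Courant-type stable time step."""
        dt_stable = min_element_m / wave_speed_m_s
        process_time = punch_travel_m / punch_speed_m_s
        return process_time / dt_stable

    # Reference model vs. a model scaled geometrically by lambda_g and
    # kinematically (tool speed) by lambda_v; element size follows the geometry.
    lambda_g, lambda_v = 2.0, 5.0
    base = explicit_increments(0.05, 1.0, 1e-3, 5000.0)
    scaled = explicit_increments(0.05 * lambda_g, 1.0 * lambda_v,
                                 1e-3 * lambda_g, 5000.0)
    # In this simplified model the geometric factor cancels in the increment count,
    # so the speed-up (here 1/lambda_v) comes from the kinematic scale -- which is
    # also the factor that can introduce spurious dynamic effects if pushed too far.
    print(scaled / base)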

Relevance: 10.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
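
As a point of reference for the decompositions being compared, a small sketch (illustrative Python, not the collapsed Tucker class proposed in Chapter 2) of how a latent class / PARAFAC-type model expresses the joint pmf of multivariate categorical data as a nonnegative low-rank tensor:

    import numpy as np

    rng = np.random.default_rng(0)
    k = 3                     # number of latent classes
    levels = [2, 3, 4]        # categories for each of p = 3 variables

    weights = rng.dirichlet(np.ones(k))                             # latent class probabilities
    factors = [rng.dirichlet(np.ones(d), size=k) for d in levels]   # per-class marginals

    # Joint pmf: P(x1, x2, x3) = sum_h weights[h] * prod_j factors[j][h, x_j]
    pmf = np.einsum('h,ha,hb,hc->abc', weights, *factors)
    assert np.isclose(pmf.sum(), 1.0)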

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
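
For orientation, a sketch of the familiar Laplace (mode-plus-curvature) Gaussian approximation to a posterior (illustrative Python). This baseline is related to, but not the same as, the KL-optimal Gaussian approximation for Diaconis--Ylvisaker priors derived in Chapter 4, and the one-cell Poisson example is an assumption made only for the demonstration.

    import numpy as np
    from scipy.optimize import minimize

    def laplace_approximation(neg_log_posterior, theta0):
        """Return (mean, covariance) of the Laplace Gaussian approximation."""
        res = minimize(neg_log_posterior, theta0, method="BFGS")
        mean = res.x
        cov = np.asarray(res.hess_inv)   # BFGS inverse-Hessian estimate at the mode
        return mean, cov

    # Example: one Poisson log-linear cell with a Gaussian prior on the log-rate,
    #   y ~ Poisson(exp(theta)),  theta ~ N(0, 10^2)
    y = 7
    nlp = lambda th: np.exp(th[0]) - y * th[0] + th[0]**2 / (2 * 10**2)
    mean, cov = laplace_approximation(nlp, np.zeros(1))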

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
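
The basic summary on which this framework rests is easy to compute; a minimal sketch (illustrative Python with a toy heavy-tailed series, not the estimator from Chapter 5) of extracting waiting times between exceedances of a high threshold:

    import numpy as np

    def exceedance_waiting_times(x, quantile=0.95):
        """Gaps (in time steps) between successive exceedances of a high threshold."""
        u = np.quantile(x, quantile)
        idx = np.flatnonzero(x > u)
        return np.diff(idx)

    rng = np.random.default_rng(1)
    series = rng.standard_t(df=3, size=10_000)   # heavy-tailed toy series
    waits = exceedance_waiting_times(series)
    print(waits.mean(), waits.max())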

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
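
As an example of the kind of approximating kernel such a framework covers, a toy sketch of a Metropolis step whose log-likelihood is estimated from a random subset of the data (illustrative Python; the step size, subsample size and rescaling rule are assumptions made for the example, not Chapter 6's framework):

    import numpy as np

    def subsampled_mh_step(theta, data, loglik_one, log_prior, rng,
                           subsample=500, step_sd=0.1):
        idx = rng.choice(len(data), size=min(subsample, len(data)), replace=False)
        scale = len(data) / len(idx)   # rescale subsample log-likelihood to full data
        def approx_log_post(t):
            return scale * sum(loglik_one(t, data[i]) for i in idx) + log_prior(t)
        # random-walk proposal accepted with the approximate posterior ratio
        prop = theta + rng.normal(0.0, step_sd, size=np.shape(theta))
        if np.log(rng.uniform()) < approx_log_post(prop) - approx_log_post(theta):
            return prop
        return theta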

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
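
For reference, the truncated-normal data augmentation sampler discussed here is the standard Albert and Chib Gibbs sampler for probit regression; a textbook sketch (illustrative Python with a flat prior on the coefficients, not the chapter's code) is:

    import numpy as np
    from scipy.stats import truncnorm

    def probit_da_gibbs(X, y, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        beta = np.zeros(p)
        XtX_inv = np.linalg.inv(X.T @ X)
        draws = np.empty((n_iter, p))
        for it in range(n_iter):
            mu = X @ beta
            # z_i | beta, y_i: N(mu_i, 1) truncated to be positive if y_i = 1, negative otherwise
            lo = np.where(y == 1, -mu, -np.inf)
            hi = np.where(y == 1, np.inf, -mu)
            z = mu + truncnorm.rvs(lo, hi, random_state=rng)
            # beta | z: Gaussian with mean (X'X)^{-1} X'z and covariance (X'X)^{-1}
            mean = XtX_inv @ X.T @ z
            beta = rng.multivariate_normal(mean, XtX_inv)
            draws[it] = beta
        return draws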

Relevance: 10.00%

Abstract:

In the last two decades, the field of homogeneous gold catalysis has been extremely active, growing at a rapid pace. Another rapidly growing field, computational chemistry, has often been applied to the investigation of various gold-catalyzed reaction mechanisms. Unfortunately, a number of recent mechanistic studies have utilized computational methods that have been shown to be inappropriate and inaccurate in their description of gold chemistry. This work presents an overview of available computational methods with a focus on the approximations and limitations inherent in each, and offers a review of experimentally characterized gold(I) complexes and proposed mechanisms as compared with their computationally modeled counterparts. No aim is made to identify a "recommended" computational method for investigations of gold catalysis; rather, discrepancies between experimentally and computationally obtained values are highlighted, and the systematic errors between different computational methods are discussed.

Relevance: 10.00%

Abstract:

Cleaner shrimp (Decapoda) regularly interact with conspecifics and client reef fish, both of which appear colourful and finely patterned to human observers. However, whether cleaner shrimp can perceive the colour patterns of conspecifics and clients is unknown, because cleaner shrimp visual capabilities are unstudied. We quantified spectral sensitivity and temporal resolution using electroretinography (ERG), and spatial resolution using both morphological (inter-ommatidial angle) and behavioural (optomotor) methods in three cleaner shrimp species: Lysmata amboinensis, Ancylomenes pedersoni and Urocaridella antonbruunii. In all three species, we found strong evidence for only a single spectral sensitivity peak of (mean ± s.e.m.) 518 ± 5, 518 ± 2 and 533 ± 3 nm, respectively. Temporal resolution in dark-adapted eyes was 39 ± 1.3, 36 ± 0.6 and 34 ± 1.3 Hz. Spatial resolution was 9.9 ± 0.3, 8.3 ± 0.1 and 11 ± 0.5 deg, respectively, which is low compared with other compound eyes of similar size. Assuming monochromacy, we present approximations of cleaner shrimp perception of both conspecifics and clients, and show that cleaner shrimp visual capabilities are sufficient to detect the outlines of large stimuli, but not to detect the colour patterns of conspecifics or clients, even over short distances. Thus, conspecific viewers have probably not played a role in the evolution of cleaner shrimp appearance; rather, further studies should investigate whether cleaner shrimp colour patterns have evolved to be viewed by client reef fish, many of which possess tri- and tetra-chromatic colour vision and relatively high spatial acuity.

Relevance: 10.00%

Abstract:

This work deals with two concepts that serve as a support for studies on sport: "sport culture" and "sport habitus". We aim to identify points of intersection and differences between them in the Brazilian field of Physical Education. We conclude that the concept of sport culture corresponds to the descriptive and structural dimensions of the configuration of culture in relation to sport in contemporary society, marked by the phenomenon of economic globalization. This concept refers to the symbolic conception of culture, which comprehends the meanings and directions assigned to the sport phenomenon and to its practice by different individuals and social groups. The sport habitus, gradually acquired through the exposure of social agents to the logic of the sport field, corresponds to a disposition to think, to make sense of and to act in this space. Not everything that is culturally produced in sport is incorporated in the form of a sport habitus, but the habitus is based on aspects of culture. That is, sport culture has a plurality of manifestations that are not always incorporated in the form of a habitus. What differentiates the use of the concepts of sport culture and sport habitus is the analytical focus of each; working with both in a complementary manner may become a theoretical and methodological option that contributes to broadening the analysis.
