861 results for paraconsistent model theory


Relevance: 30.00%

Abstract:

We present a generalized test case generation method, called the G method. Although inspired by the W method, the G method allows test suite generation even in the absence of characterization sets for the specification models. Instead, the G method relies on knowledge about the index of certain equivalences induced on the implementation models. We show that the W method can be derived from the G method as a particular case. Moreover, we discuss some naturally occurring infinite classes of FSM models over which the G method generates test suites that are exponentially more compact than those produced by the W method.
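A minimal sketch of the W-method flavor of FSM-based test generation that this abstract builds on: a transition cover concatenated with a characterization set W. The FSM, its alphabet, and W below are invented for illustration; the G method itself dispenses with W in favor of knowledge about equivalences on the implementation model.

```python
# Hypothetical sketch of W-method-style test suite generation for a tiny FSM.
# States, inputs, and the characterization set W are illustrative only.

from itertools import product

# FSM: (state, input) -> (next_state, output)
fsm = {
    ("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
    ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0),
}

def transition_cover(fsm, initial="s0"):
    """Breadth-first: input sequences exercising every transition once."""
    cover, frontier, seen = [""], [("", initial)], {initial}
    while frontier:
        prefix, state = frontier.pop(0)
        for inp in "ab":
            nxt, _ = fsm[(state, inp)]
            cover.append(prefix + inp)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((prefix + inp, nxt))
    return cover

# A characterization set W distinguishes every pair of states by output;
# here the single input "a" suffices: s0/a -> 0 while s1/a -> 1.
W = ["a"]

suite = sorted({p + w for p, w in product(transition_cover(fsm), W)})
print(suite)  # -> ['a', 'aa', 'aaa', 'aba', 'ba']
```

Each test is a transition-cover prefix followed by a distinguishing suffix; running the suite on an implementation and comparing outputs detects transfer and output faults under the usual W-method assumptions.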

Relevance: 30.00%

Abstract:

Structural and electronic properties of the Pt(n)TM(55-n) (TM = Co, Rh, Au) nanoalloys are investigated using density functional theory within the generalized gradient approximation and employing the all-electron projected augmented wave method. For TM = Co and Rh, the excess energy, which measures the relative energy stability of the nanoalloys, is negative for all Pt compositions. We found that the excess energy has similar values for a wide range of Pt compositions, i.e., n = 20-42 and n = 28-42 for Co and Rh, respectively, with the core-shell icosahedron-like configuration (n = 42) being slightly more stable for both Co and Rh systems because of the larger release of strain energy due to the smaller atomic size of the Co and Rh atoms. For TM = Au, the excess energy is positive for all compositions except n = 13, which is energetically favorable due to the formation of the core-shell structure (Pt in the core and Au atoms at the surface). Thus, our calculations confirm that the formation of core-shell structures plays an important role in increasing the stability of nanoalloys. The center of gravity of the occupied d-states changes almost linearly as a function of the Pt composition, and hence, based on the d-band model, the magnitude of the adsorption energy of an adsorbate can be tuned by changing the Pt composition. The magnetic moments of Pt(n)Co(55-n) decrease almost linearly as a function of the Pt composition; however, the same does not hold for PtRh and PtAu. We found a severalfold enhancement of the magnetic moments of PtRh with increasing Pt composition, which we explain by the compression effects induced by the large size of the Pt atoms compared with the Rh atoms.
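The excess energy used above to rank nanoalloy stability has a standard definition for 55-atom clusters; a hedged sketch follows, in which the total energies are made-up placeholders rather than the paper's DFT results.

```python
# Hedged sketch of the excess-energy definition used to compare nanoalloy
# stability; the total energies below are illustrative, not DFT values.

def excess_energy(e_alloy, e_pt55, e_tm55, n, total=55):
    """E_exc = E(Pt_n TM_{55-n}) - (n/55) E(Pt55) - ((55-n)/55) E(TM55)."""
    return e_alloy - (n / total) * e_pt55 - ((total - n) / total) * e_tm55

# Negative values indicate the alloy is more stable than the weighted
# average of the pure 55-atom clusters; positive values, less stable.
print(excess_energy(e_alloy=-300.0, e_pt55=-280.0, e_tm55=-310.0, n=13))
```

With these placeholder energies the result is positive (about +2.91), i.e. mixing would be unfavorable; in the paper the sign of this quantity distinguishes the Co/Rh alloys (negative) from most Au compositions (positive).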

Relevance: 30.00%

Abstract:

We show, in the imaginary time formalism, that the temperature dependent parts of all the retarded (advanced) amplitudes vanish in the Schwinger model. We trace this behavior to the CPT invariance of the theory and give a physical interpretation of this result in terms of forward scattering amplitudes of on-shell thermal particles.

Relevance: 30.00%

Abstract:

The President of Brazil established an Interministerial Work Group in order to “evaluate the model of classification and valuation of disabilities used in Brazil and to define the elaboration and adoption of a single model for the whole country”. Eight Ministries and/or Secretariats participated in the discussion over a period of 10 months, concluding that the proposed model should be based on the United Nations Convention on the Rights of Persons with Disabilities, the International Classification of Functioning, Disability and Health, and the ‘support theory’, and organizing a list of recommendations and actions needed for a Classification, Evaluation and Certification Network with national coverage.

Relevance: 30.00%

Abstract:

Remanufacturing is the process of rebuilding used products so that the quality of remanufactured products is equivalent to that of new ones. Although the theme is gaining ground, it is still little explored because of a lack of knowledge and the difficulty of viewing it systemically and implementing it effectively. Few models treat remanufacturing as a system; most studies still treat it as an isolated process, preventing it from being seen in an integrated manner. Therefore, the aim of this work is to organize the knowledge about remanufacturing, offering a vision of the remanufacturing system and contributing to an integrated view of the theme. The methodology employed was a literature review, adopting the General Theory of Systems to characterize the remanufacturing system. This work consolidates and organizes the elements of this system, enabling a better understanding of remanufacturing and assisting companies in adopting the concept.

Relevance: 30.00%

Abstract:

The purpose of this study is to apply inverse dynamics control to a six-degree-of-freedom flight simulator motion system. Imperfect compensation of the inverse dynamics control is intentionally introduced in order to simplify the implementation of this approach. A control strategy is applied in the outer loop of the inverse dynamics control to counteract the effects of imperfect compensation; this strategy is designed using H∞ theory. The forward and inverse kinematics and the full dynamic model of a six-degree-of-freedom motion base driven by electromechanical actuators are briefly presented. Describing functions, acceleration step responses, and maneuvers computed from the washout filter were used to evaluate the performance of the controllers.
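A toy sketch of the control idea in this abstract: computed-torque (inverse dynamics) control with deliberately imperfect model compensation, stabilized by an outer loop. A single-mass plant stands in for the six-degree-of-freedom motion base, the outer loop is a plain PD law rather than the paper's H∞ design, and all gains and parameters are invented.

```python
# Minimal sketch of inverse-dynamics (computed-torque) control with
# intentionally imperfect compensation, as in the abstract. The plant is a
# single mass; gains and parameters are illustrative, not the simulator's.

m_true, m_model = 2.0, 1.8           # imperfect mass estimate (10% error)
kp, kd = 100.0, 20.0                 # outer-loop PD gains (stand-in for H-inf)
q, dq, dt = 0.0, 0.0, 1e-3
q_des = 1.0                          # step command

for _ in range(5000):                # 5 s of explicit-Euler simulation
    e, de = q_des - q, -dq
    v = kp * e + kd * de             # outer-loop control signal
    tau = m_model * v                # inverse-dynamics feedforward (imperfect)
    ddq = tau / m_true               # true plant dynamics
    dq += ddq * dt
    q += dq * dt

print(round(q, 3))  # settles near the 1.0 rad command despite model error
```

The point of the sketch is that the outer loop absorbs the mismatch between `m_model` and `m_true`, which is exactly the role the paper assigns to the H∞ controller.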

Relevance: 30.00%

Abstract:

In this paper, we present a vascular tree model made of synthetic materials that allows us to obtain images for a 3D reconstruction. We used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. To calibrate the camera we used a corner detector, and we used optical flow techniques to track the points through the image sequence forward and backward. We describe two general techniques to extract a sequence of corresponding points from multiple views of an object. The resulting sequence of points is later used to reconstruct a set of 3D points representing the object surfaces in the scene. We performed the 3D reconstruction by randomly choosing pairs of images and computing the projection error; after several repetitions, we found the best 3D location for each point.
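A hedged sketch of the two-view step behind such a reconstruction: linear (DLT) triangulation of a matched image point from two cameras. The camera matrices and the 3D point below are synthetic examples, not the paper's calibration.

```python
# Hypothetical sketch of two-view DLT triangulation: given two projection
# matrices and a matched image point in each view, recover the 3D point.
# The cameras and test point are synthetic, invented for illustration.

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Stack the DLT constraints A X = 0 and solve for X via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]      # null-space vector (homogeneous point)
    return X[:3] / X[3]

# Two synthetic cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)     # project into each view
x2 = P2 @ np.append(X_true, 1.0)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))   # recovers [0.5, 0.2, 4.0]
```

With noisy correspondences (as from real corner detection and optical flow), the same least-squares machinery gives the reprojection error the abstract uses to select the best 3D location.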

Relevance: 30.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high energy particles in the ICM.
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:

• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?

• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?

• How many Radio Halos are expected to form in the Universe, and at which redshift is the bulk of these sources expected?

• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?

• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?

Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8.
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift.
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies, and it allows one to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis.
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last “geometrical” MH-RH correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio emitting region. This is a powerful new investigative tool, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, . . . ) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.

Relevance: 30.00%

Abstract:

This study analyzes rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modeling of the rural built environment; hence a theoretical modeling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has driven the construction of new buildings beside abandoned or derelict rural ones. Consequently, two main transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of this study is to propose a methodology for developing a spatial model that allows the identification of the driving forces that acted on building allocation. Indeed, one of the most concerning dynamics today is the irrational sprawl of buildings across the landscape. The proposed methodology is composed of several conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm to adopt in relation to the statistical theory and method used, and the calibration and evaluation of the model.
Different combinations of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it expresses the action of driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows one to associate the concept of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist and are therefore generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. The study area for this test is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data for the periurban and rural parts of the study area over the 1975-2005 period by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated with the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
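A hedged sketch of the statistical core described above: a logistic (logit-link) generalized linear model fit to synthetic presence/absence points by gradient ascent. The single covariate and the generating coefficients are invented, not taken from the study.

```python
# Illustrative sketch of a presence/absence logistic GLM, fit here by
# gradient ascent on synthetic data. The covariate (imagine, e.g., distance
# to infrastructure) and its true coefficients (2.0, -1.5) are made up.

import math
import random

random.seed(0)
# Synthetic driving force: buildings more likely at small covariate values.
xs = [random.uniform(0, 4) for _ in range(500)]
data = [(x, 1 if random.random() < 1 / (1 + math.exp(-(2.0 - 1.5 * x))) else 0)
        for x in xs]

b0, b1, lr = 0.0, 0.0, 0.5
for _ in range(3000):                       # gradient ascent on log-likelihood
    g0 = g1 = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted presence probability
        g0 += y - p
        g1 += (y - p) * x
    b0 += lr * g0 / len(data)
    b1 += lr * g1 / len(data)

print(round(b0, 2), round(b1, 2))  # estimates should land near (2.0, -1.5)
```

Applying the fitted model over a grid of covariate values yields exactly the kind of 0-1 probability surface of building occurrence that the abstract describes.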

Relevance: 30.00%

Abstract:

The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-response questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model is a combination of two models already presented in the literature: a latent trait model for mixed responses and an IRT model for a skew-normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item measuring HRQoL in children, the EQ-5D-Y questionnaire, using a large sample of children collected in schools. In comparison with a model for only discrete responses and a model for mixed responses with a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times, and precision of the estimates.
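A toy sketch of the Bayesian machinery described above: random-walk Metropolis sampling of one respondent's latent trait in a Rasch-type IRT model with known item difficulties. The full model of the thesis (mixed responses, skew-normal latent variable) is far richer; all numbers here are illustrative.

```python
# Toy sketch: MCMC (random-walk Metropolis) for the posterior of a single
# latent HRQoL trait under a Rasch-type model with five binary items.
# Item difficulties, responses, prior, and tuning are invented.

import math
import random

random.seed(1)
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]    # five binary items
responses = [1, 1, 1, 0, 0]                   # one child's answers

def log_post(theta):
    lp = -0.5 * theta ** 2                    # standard-normal prior
    for b, y in zip(difficulties, responses):
        p = 1 / (1 + math.exp(-(theta - b)))  # Rasch success probability
        lp += math.log(p if y else 1 - p)
    return lp

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.5)       # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                          # Metropolis accept
    chain.append(theta)

post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
print(round(post_mean, 2))
```

The thesis samples all item and latent-variable parameters jointly rather than conditioning on known difficulties, but the accept/reject logic is the same.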

Relevance: 30.00%

Abstract:

The research work concerns an analysis of the foundations of Quantum Field Theory carried out from an educational perspective. The whole research has been driven by two questions:

• How does the concept of object change when moving from classical to contemporary physics?

• How are the concepts of field and interaction shaped and conceptualized within contemporary physics? What makes quantum fields and interactions similar to, and what makes them different from, the classical ones?

The work has been developed through several studies:

1. A study aimed at analyzing the formal and conceptual structures characterizing the description of continuous systems that remain invariant in the transition from classical to contemporary physics.

2. A study aimed at analyzing the changes in the meanings of the concepts of field and interaction in the transition to quantum field theory.

3. A detailed study of the Klein-Gordon equation aimed at analyzing, in a case considered emblematic, some interpretative (conceptual and didactical) problems with the concept of field that university textbooks do not address explicitly.

4. A study concerning the application of the “Discipline-Culture” model elaborated by I. Galili to the analysis of the Klein-Gordon equation, in order to reconstruct the meanings of the equation from a cultural perspective.

5. A critical analysis, in the light of the results of the studies mentioned above, of existing proposals for teaching basic concepts of Quantum Field Theory and particle physics at the secondary school level or in introductory university physics courses.
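For reference, the free Klein-Gordon equation at the center of studies 3 and 4 (written in natural units, with the +--- metric signature assumed here):

```latex
% Free Klein-Gordon equation, the case study of studies 3 and 4:
\left( \partial_\mu \partial^\mu + m^2 \right)\varphi(x) = 0,
\qquad
\partial_\mu \partial^\mu = \frac{\partial^2}{\partial t^2} - \nabla^2 .
```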

Relevance: 30.00%

Abstract:

This dissertation investigates two different aspects of the odd-intrinsic-parity sector of mesonic chiral perturbation theory (mesonic ChPT). First, the one-loop renormalization of the leading term, the so-called Wess-Zumino-Witten action, is carried out. To this end, the complete one-loop part of the theory must first be extracted by means of the saddle-point method. Subsequently, all singular one-loop structures are isolated within the heat-kernel technique. Finally, these divergent parts must be absorbed, which requires the most general anomalous Lagrangian of order O(p^6); this Lagrangian is developed systematically. Extending the chiral group SU(n)_L x SU(n)_R to SU(n)_L x SU(n)_R x U(1)_V brings additional monomials into play. The renormalized coefficients of this Lagrangian, the low-energy constants (LECs), are initially free parameters of the theory that must be fixed individually. By considering a complementary vector-meson model, the amplitudes of suitable processes can be determined and, through comparison with the results of mesonic ChPT, a numerical estimate of some LECs can be obtained. In the second part, a consistent one-loop calculation is carried out for the anomalous process (virtual) photon + charged kaon -> charged kaon + neutral pion. As a check of our results, an existing calculation for the reaction (virtual) photon + charged pion -> charged pion + neutral pion is reproduced. Including the estimated values of the respective LECs, the associated hadronic structure functions can be determined numerically and discussed.

Relevance: 30.00%

Abstract:

Despite intensive research during the last decades, the theoretical understanding of supercooled liquids and the glass transition is still far from complete. Besides analytical investigations, the so-called energy-landscape approach has turned out to be very fruitful. In the literature, many numerical studies have demonstrated that, at sufficiently low temperatures, all thermodynamic quantities can be predicted with the help of the properties of local minima in the potential energy landscape (PEL). The main purpose of this thesis is to strive for an understanding of dynamics in terms of the potential energy landscape. In contrast to the study of static quantities, this requires knowledge of the barriers separating the minima. Up to now, the general viewpoint has been that thermally activated processes ('hopping') determine the dynamics only below Tc (the critical temperature of mode-coupling theory), in the sense that relaxation rates follow from local energy barriers. As we show here, this viewpoint should be revised, since the temperature dependence of the dynamics is governed by hopping processes already below 1.5 Tc. Using the example of a binary mixture of Lennard-Jones particles (BMLJ), we establish a quantitative link from the diffusion coefficient, D(T), to the PEL topology. This is achieved in three steps. First, we show that it is essential to consider whole superstructures of many PEL minima, called metabasins, rather than single minima; this is a consequence of strong correlations within groups of PEL minima. Second, we show that D(T) is inversely proportional to the average residence time in these metabasins. Third, the temperature dependence of the residence times is related to the depths of the metabasins, as given by the surrounding energy barriers. We further discuss that the study of small (but not too small) systems is essential, in that one deals with a less complex energy landscape than in large systems. In a detailed analysis of different system sizes, we show that the small BMLJ system considered throughout the thesis is free of major finite-size-related artifacts.
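The three-step link sketched above can be summarized schematically (the notation is assumed for illustration, not taken verbatim from the thesis):

```latex
% Schematic form of the D(T)-to-landscape link: diffusion is set by the
% mean metabasin residence time, which is thermally activated over the
% surrounding energy barriers (prefactor tau_0 and barrier scale Delta E
% are placeholder symbols).
D(T) \;\propto\; \frac{1}{\langle \tau(T) \rangle},
\qquad
\langle \tau(T) \rangle \;\sim\; \tau_0 \, e^{\Delta E / k_B T}.
```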

Relevance: 30.00%

Abstract:

Since the development of quantum mechanics it has been natural to analyze the connection between classical and quantum mechanical descriptions of physical systems. In particular, one should expect that, in some sense, when quantum mechanical effects become negligible the system behaves as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (depending on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation, by means of a Yukawa-type potential. The classical limit is a mean field and weak coupling limit. We prove that the transition amplitude of a creation or annihilation operator, between suitable coherent states, converges in the classical limit to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to the corresponding products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
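For reference, the Ehrenfest relations alluded to above, for a quantum particle of mass m in a potential V: expectation values obey classical-looking equations of motion,

```latex
% Ehrenfest's theorem: expectation values follow classical-looking dynamics
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\,\langle V'(\hat{x}) \rangle .
```

Note that $\langle V'(\hat{x})\rangle \neq V'(\langle \hat{x}\rangle)$ in general, which is why the classical limit requires suitable states (such as the coherent states used by Hepp) rather than holding identically.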

Relevance: 30.00%

Abstract:

The dissertation is structured in three parts. The first part compares US and EU agricultural policies since the end of WWII. There is not enough evidence to claim that agricultural support has a negative impact on obesity trends. I discuss the possibility of an exchange of best practices to fight obesity. There are relevant economic, societal and legal differences between the US and the EU; however, partnerships against obesity are welcome. The second part presents a socio-ecological model of the determinants of obesity. I employ an interdisciplinary model because it captures the simultaneous influence of several variables: obesity is an interaction of pre-birth, primary and secondary socialization factors. To test the significance of each factor, I use data from the National Longitudinal Survey of Adolescent Health and compare the average body mass index across different populations; the differences in means are statistically significant. In the last part I use the National Survey of Children's Health. I analyze the effect that family characteristics, the built environment, cultural norms and individual factors have on the body mass index (BMI). I use Ordered Probit models and calculate the marginal effects, with State and ethnicity fixed effects to control for unobserved heterogeneity. I find that southern US States tend to have, on average, a higher probability of obesity. On the ethnicity side, White Americans have a lower BMI with respect to Black Americans, Hispanics and American Indians/Native Islanders; being Asian is associated with a lower probability of being obese. In neighborhoods where the trust level and safety perception are higher, children are less overweight and obese. Similar results are shown for higher levels of parental income and education. Breastfeeding has a negative impact on BMI. Higher values of measures of behavioral disorders have a positive and significant impact on obesity, as predicted by the theory.
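A hedged sketch of the Ordered Probit structure mentioned above: BMI-category probabilities computed from a latent index and cutpoints, with a marginal effect obtained by finite differences. All coefficients and cutpoints are invented for illustration, not estimates from the survey data.

```python
# Hedged sketch of an Ordered Probit: category probabilities from a latent
# index x'beta and estimated cutpoints, plus a finite-difference marginal
# effect. Coefficients and cutpoints below are made up.

import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

cuts = [-0.5, 0.5, 1.5]    # thresholds: under / normal / overweight / obese

def category_probs(xb):
    """P(category k) = Phi(cut_k - xb) - Phi(cut_{k-1} - xb)."""
    cdf = [norm_cdf(c - xb) for c in cuts]
    return [cdf[0], cdf[1] - cdf[0], cdf[2] - cdf[1], 1 - cdf[2]]

beta_trust = -0.3          # hypothetical effect of neighborhood trust
xb = 0.2                   # latent index for a reference child

# Marginal effect of trust on P(obese), by finite differences:
me = category_probs(xb + beta_trust)[3] - category_probs(xb)[3]
print([round(p, 3) for p in category_probs(xb)], round(me, 3))
```

With a negative trust coefficient, the marginal effect on the top (obese) category is negative, mirroring the abstract's finding that higher neighborhood trust is associated with less overweight and obesity.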