913 results for count data models
Abstract:
Since the discovery of the Higgs boson at the LHC, its use as a probe in searches for physics beyond the standard model, such as supersymmetry, has become increasingly important, as demonstrated by a recent search by the CMS experiment using razor variables in the diphoton final state. Motivated by this search, this thesis examines the LHC discovery potential of a SUSY scenario involving bottom squark pair production with a Higgs boson in the final state. We design and implement a software-based trigger using the razor variables for the CMS experiment to record events with a bottom quark-antiquark pair from a Higgs boson. We characterize the full range of signatures at the LHC from this Higgs-aware SUSY scenario and demonstrate the sensitivity of the CMS data to this model.
Abstract:
The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.
Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models which constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones, which are characterized by geoid highs (degrees 4-9). Recent models for seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole mantle convection with a low viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.
Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.
Abstract:
Background: Recently, with access to low-toxicity biological and targeted therapies, evidence of the existence of a long-term survival subpopulation of cancer patients is emerging. We studied an unselected population with advanced lung cancer to look for evidence of multimodality in the survival distribution and to estimate the proportion of long-term survivors. Methods: We used survival data from 4944 patients with non-small-cell lung cancer (NSCLC), stages IIIb-IV at diagnosis, registered in the National Cancer Registry of Cuba (NCRC) between January 1998 and December 2006. We fitted a one-component survival model and two-component mixture models to identify short- and long-term survivors. The Bayesian information criterion was used for model selection. Results: For all of the selected parametric distributions, the two-component model presented the best fit. The short-term survival subpopulation (median survival of almost 4 months) represented 64% of patients. The long-term survival subpopulation included 35% of patients and showed a median survival of around 12 months. None of the short-term survival patients was still alive at month 24, while 10% of the long-term survival patients died after that point. Conclusions: There is a subgroup showing long-term evolution among patients with advanced lung cancer. As survival rates continue to improve with the new generation of therapies, prognostic models considering short- and long-term survival subpopulations should be considered in clinical research.
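The two-component idea above can be sketched numerically. The following hedged example (not the NCRC data; all parameter values are invented) simulates a 64/36 mixture of short- and long-term exponential survival times, fits one- and two-component models by maximum likelihood, and compares them with BIC:

```python
# Hypothetical sketch: one- vs two-component survival models chosen by BIC.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4000
is_short = rng.random(n) < 0.64
t = np.where(is_short,
             rng.exponential(4.0 / np.log(2), n),    # median ~4 months
             rng.exponential(12.0 / np.log(2), n))   # median ~12 months

def nll_one(params):                   # one-component exponential
    lam = np.exp(params[0])            # rate on log scale for positivity
    return -np.sum(np.log(lam) - lam * t)

def nll_two(params):                   # two-component exponential mixture
    lam1, lam2 = np.exp(params[:2])
    p = 1.0 / (1.0 + np.exp(-params[2]))              # logit mixing weight
    f = p * lam1 * np.exp(-lam1 * t) + (1 - p) * lam2 * np.exp(-lam2 * t)
    return -np.sum(np.log(f))

fit1 = minimize(nll_one, [0.0], method="Nelder-Mead")
fit2 = minimize(nll_two, [-1.0, -3.0, 0.5], method="Nelder-Mead")

bic1 = 2 * fit1.fun + 1 * np.log(n)    # BIC = 2*NLL + k*log(n)
bic2 = 2 * fit2.fun + 3 * np.log(n)
print(bic2 < bic1)                     # expect True: mixture fits better
```

With real registry data one would also handle censoring and try several parametric families, as the study describes; the BIC comparison itself works the same way.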
Abstract:
This paper deals with the convergence of a remote iterative learning control system subject to data dropouts. The system is composed of a set of discrete-time multiple-input multiple-output linear models, each one with its corresponding actuator device and its sensor. Each actuator applies the input signals vector to its corresponding model at the sampling instants, and the sensor measures the output signals vector. The iterative learning law is processed in a controller located far away from the models, so the control signals vector has to be transmitted from the controller to the actuators through transmission channels. Such a law uses the measurements of each model to generate the input vector to be applied to the subsequent model, so the measurements of the models have to be transmitted from the sensors to the controller. All transmissions are subject to failures, which are described as a binary sequence taking values 1 or 0. A dropout compensation technique is used to replace the data lost in the transmission processes. The convergence to zero of the errors between the output signals vector and a reference one is achieved as the number of models tends to infinity.
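A toy numerical sketch (not the paper's formulation) shows the mechanism: an iterative learning update on a scalar static model, where output measurements lost in transmission are replaced by the last received value. The plant gain, learning gain, and dropout rate below are all invented:

```python
# Hypothetical sketch: iterative learning with dropout compensation.
import numpy as np

rng = np.random.default_rng(6)
T = 20                                      # samples per trial
ref = np.sin(np.linspace(0, np.pi, T))      # reference output vector

u = np.zeros(T)                             # input signals vector
last_y = np.zeros(T)                        # last successfully received output
errors = []
for k in range(80):                         # successive iterations/models
    y = 0.8 * u                             # toy model output
    received = rng.random(T) > 0.2          # True = transmitted, False = dropped
    last_y = np.where(received, y, last_y)  # dropout compensation: hold last value
    u = u + 0.5 * (ref - last_y)            # learning update, gain 0.5
    errors.append(np.abs(ref - y).max())
print(errors[0], errors[-1])                # tracking error shrinks across trials
```

The small learning gain keeps the update contractive even when the compensated measurement is a few iterations stale, which is the intuition behind convergence despite dropouts.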
Abstract:
We develop and test a method to estimate relative abundance from catch and effort data using neural networks. Most stock assessment models use time series of relative abundance as their major source of information on abundance levels. These time series of relative abundance are frequently derived from catch-per-unit-of-effort (CPUE) data using generalized linear models (GLMs). GLMs attempt to remove variation in CPUE that is not related to the abundance of the population. However, GLMs are restricted in the types of relationships they allow between CPUE and the explanatory variables. An alternative approach is to use structural models based on scientific understanding to develop complex non-linear relationships between CPUE and the explanatory variables. Unfortunately, the scientific understanding required to develop these models may not be available. In contrast to structural models, neural networks use the data to estimate the structure of the non-linear relationship between CPUE and the explanatory variables. Therefore, neural networks may provide a better alternative when the structure of the relationship is uncertain. We use simulated data based on a habitat-based method to test the neural network approach and to compare it with the GLM approach. Cross-validation and simulation tests show that the neural network performed better than nominal effort and the GLM approach. However, the improvement over GLMs is not substantial. We applied the neural network model to CPUE data for bigeye tuna (Thunnus obesus) in the Pacific Ocean.
Abstract:
About 97% of Brazilian children begin breastfeeding within the first hours of life. However, weaning starts early, in the first weeks or months of life, with the introduction of water, teas, juices, other milks, and other foods. Social, cultural, psychological, and economic factors related to the mother and to the infant may be associated with variations in infant feeding practices during the first months of life. The aim of this study was to investigate the association between social networks, social support, and the feeding practices of infants in the fourth month of life. A cross-sectional study was carried out within a prospective cohort whose source population consisted of newborns enrolled at primary health care units of the Municipal Health Department of Rio de Janeiro. To assess feeding practices, an adapted 24-hour dietary recall was administered to the mothers (n=313), and two indicators were constructed based on the consumption of solid foods and of milk feeds. To measure social networks, questions were asked about the number of relatives and friends the woman could count on and about her participation in group social activities. To assess social support, a scale used in the Medical Outcomes Study (MOS) and adapted for use in Brazil was applied. Data analysis was based on multinomial logistic regression models, estimating odds ratios and their respective 95% confidence intervals for the associations between variables. Exclusive breastfeeding (EBF) was observed in 16% of infants, predominant breastfeeding (PBF) in 18.8%, breast milk combined with other foods in approximately 48%, and artificial feeding in 16.5%. Regarding complementary feeding, 25.9% of infants consumed solid foods and 37.5% consumed milk-based foods.
Children of mothers who reported fewer relatives they could count on and low social support had a higher chance of being artificially fed, rather than exclusively breastfed, compared with children of mothers who reported being able to count on relatives or having a high level of social support. Low social support in the emotional/informational dimensions was associated with PBF. In view of these findings, we highlight the need to integrate the members of the woman's social network into prenatal, delivery, and postpartum care, so that this network can provide the social support that meets her needs and thus contribute to the initiation and maintenance of exclusive breastfeeding.
Abstract:
Objective: Although dobutamine is widely used in neonatal clinical practice, the evidence for its use in this specific population is not clear. We conducted a systematic review of the use of dobutamine in juvenile animals to determine whether the evidence from juvenile animal experiments with dobutamine supports the design of clinical trials in the neonatal/paediatric population. Methods: Studies were identified by searching MEDLINE (1946-2012) and EMBASE (1974-2012). Articles retrieved were independently reviewed by three authors, and only those concerning the efficacy and safety of the drug in juvenile animals were included. Only original articles published in English and Spanish were included. Results: Following our literature search, 265 articles were retrieved and 24 studies were included in the review: 17 focused on neonatal models and 7 on young animal models. Although the aims and design of these studies, as well as the doses and ages analysed, were quite heterogeneous, the majority of authors agree that dobutamine infusion improves cardiac output in a dose-dependent manner. Moreover, the cardiovascular effects of dobutamine are influenced by postnatal age, as well as by the dose used and the duration of the therapy. There is inadequate information about the effects of dobutamine on cerebral perfusion to draw conclusions. Conclusion: There is enough preclinical evidence to confirm that dobutamine improves cardiac output; however, to better understand its effects on peripheral organs, such as the brain, more specific and well-designed studies are required to provide additional data to support the design of clinical trials in a paediatric population.
Abstract:
The natural mortality rate (M) of fish varies with size and age, although it is often assumed to be constant in stock assessments. Misspecification of M may bias important assessment quantities. We simulated fishery data using an age-based population model and then conducted stock assessments on the simulated data. Results were compared with known values. Misspecification of M had a negligible effect on the estimation of relative stock depletion; however, it had a large effect on the estimation of parameters describing the stock-recruitment relationship, age-specific selectivity, and catchability. If high M occurs in juvenile and old fish but is misspecified in the assessment model, virgin biomass and catchability are often poorly estimated. In addition, stock-recruitment relationships are often very difficult to estimate: steepness values are commonly estimated at the upper bound (1.0), and overfishing limits tend to be biased low. Natural mortality can be estimated in assessment models if M is constant across ages or if selectivity is asymptotic. However, if M is higher in old fish and selectivity is dome-shaped, M and selectivity cannot both be adequately estimated because of strong interactions between them.
Abstract:
Body-size measurement errors are usually ignored in stock assessments, but may be important when body-size data (e.g., from visual surveys) are imprecise. We used experiments and models to quantify measurement errors and their effects on assessment models for sea scallops (Placopecten magellanicus). Errors in size data obscured modes from strong year classes and increased the frequency and size of the largest and smallest sizes, potentially biasing growth, mortality, and biomass estimates. Modeling techniques for errors in age data proved useful for errors in size data. In terms of the goodness of model fit to the assessment data, it was more important to accommodate variance than bias. Models that accommodated size errors fitted size data substantially better. We recommend experimental quantification of errors along with a modeling approach that accommodates measurement errors, because a direct algebraic approach was not robust and because error parameters were difficult to estimate in our assessment model. The importance of measurement errors depends on many factors and should be evaluated on a case-by-case basis.
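The blurring effect described above is easy to demonstrate numerically. This small illustration (not the scallop assessment itself; sizes, year classes, and the error standard deviation are invented) adds Gaussian measurement error to a bimodal size distribution and shows that the valley between year-class modes fills in while the tails extend:

```python
# Hypothetical sketch: measurement error obscures size-composition modes.
import numpy as np

rng = np.random.default_rng(5)
true_sizes = np.concatenate([rng.normal(80, 3, 5000),    # strong year class
                             rng.normal(110, 3, 5000)])  # older year class
measured = true_sizes + rng.normal(0, 6, true_sizes.size)

hist_true, edges = np.histogram(true_sizes, bins=np.arange(60, 131, 2))
hist_meas, _ = np.histogram(measured, bins=edges)

# The valley between the two modes fills in under measurement error,
# and extreme sizes become more frequent.
valley = slice(np.searchsorted(edges, 92), np.searchsorted(edges, 98))
print(hist_meas[valley].sum() > hist_true[valley].sum())
print(measured.max() > true_sizes.max())
```

An error model in the assessment would convolve the predicted size composition with this error distribution before fitting, analogous to ageing-error matrices in age-based models.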
Abstract:
Paired-tow calibration studies provide information on changes in survey catchability that may occur because of some necessary change in protocols (e.g., change in vessel or vessel gear) in a fish stock survey. This information is important to ensure the continuity of annual time-series of survey indices of stock size that provide the basis for fish stock assessments. There are several statistical models used to analyze the paired-catch data from calibration studies. Our main contributions are results from simulation experiments designed to measure the accuracy of statistical inferences derived from some of these models. Our results show that a model commonly used to analyze calibration data can provide unreliable statistical results when there is between-tow spatial variation in the stock densities at each paired-tow site. However, a generalized linear mixed-effects model gave very reliable results over a wide range of spatial variations in densities and we recommend it for the analysis of paired-tow survey calibration data. This conclusion also applies if there is between-tow variation in catchability.
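A toy illustration of why pairing helps (this is not the recommended GLMM, just the underlying logic): the unknown site density is shared by both tows of a pair, so it cancels in the per-pair log catch ratio, leaving the relative catchability between the two vessels even when densities vary strongly between sites. All values below are invented:

```python
# Hypothetical sketch: estimating relative catchability from paired tows.
import numpy as np

rng = np.random.default_rng(2)
n_sites = 300
log_density = rng.normal(4.0, 1.0, n_sites)     # strong between-site variation
rho = 0.7                                       # true relative catchability

catch_old = rng.poisson(np.exp(log_density))          # old vessel
catch_new = rng.poisson(rho * np.exp(log_density))    # new vessel

ok = (catch_old > 0) & (catch_new > 0)          # drop zero-catch pairs
log_ratio = np.log(catch_new[ok] / catch_old[ok])
rho_hat = np.exp(log_ratio.mean())
print(round(rho_hat, 2))                        # close to the true ratio
```

The GLMM the study recommends generalizes this by treating site densities (and, if needed, between-tow density differences) as random effects, which keeps the inference reliable when the simple ratio assumptions break down.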
Abstract:
Molecular markers have been demonstrated to be useful for the estimation of stock mixture proportions where the origin of individuals is determined from baseline samples. Bayesian statistical methods are widely recognized as providing a preferable strategy for such analyses. In general, Bayesian estimation is based on standard latent class models using data augmentation through Markov chain Monte Carlo techniques. In this study, we introduce a novel approach based on recent developments in the estimation of genetic population structure. Our strategy combines analytical integration with stochastic optimization to identify stock mixtures. An important enhancement over previous methods is the possibility of appropriately handling data where only partial baseline sample information is available. We address the potential use of nonmolecular, auxiliary biological information in our Bayesian model.
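The standard latent-class mixture idea the study builds on can be sketched as follows: given baseline allele frequencies for two stocks at biallelic loci (Hardy-Weinberg assumed), an EM loop estimates the mixture proportion. The frequencies, locus count, and true proportion here are invented, and the study's actual method replaces this with analytical integration plus stochastic optimization:

```python
# Hypothetical sketch: EM for a two-stock genetic mixture proportion.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
n_loci, n_fish = 20, 500
freq = np.vstack([rng.uniform(0.1, 0.9, n_loci),   # baseline: stock A
                  rng.uniform(0.1, 0.9, n_loci)])  # baseline: stock B
true_p = 0.7                                       # true proportion of stock A
origin = rng.random(n_fish) < true_p
genos = rng.binomial(2, np.where(origin[:, None], freq[0], freq[1]))

# Per-fish log-likelihood under each stock's allele frequencies.
ll = np.stack([binom.logpmf(genos, 2, f).sum(axis=1) for f in freq], axis=1)

p = 0.5                                            # starting value
for _ in range(100):                               # EM iterations
    w = np.array([np.log(p), np.log(1 - p)]) + ll  # log posterior weights
    w = np.exp(w - w.max(axis=1, keepdims=True))
    resp = w[:, 0] / w.sum(axis=1)                 # P(stock A | genotype), E-step
    p = resp.mean()                                # M-step
print(round(p, 2))
```

Handling partial baseline information, as the abstract emphasizes, requires treating the baseline frequencies themselves as uncertain rather than fixed, which is where the Bayesian machinery comes in.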
Abstract:
Recent player tracking technology provides new information about basketball game performance. The aim of this study was to (i) compare the game performances of all-star and non-all-star basketball players from the National Basketball Association (NBA), and (ii) describe different basketball game performance profiles based on different game roles. Archival data were obtained from all 2013-2014 regular season games (n = 1230). The variables analyzed included points per game, minutes played, and the game actions recorded by the player tracking system. To address the first aim, performance per minute of play was analyzed using a descriptive discriminant analysis to identify which variables best predict the all-star and non-all-star playing categories. The all-star players showed slower velocities in defense and performed better in elbow touches, defensive rebounds, close touches, close points, and pull-up points, possibly due to optimized attention processes that are key for perceiving the appropriate environmental information. The second aim was addressed using a k-means cluster analysis to create maximally different performance profile groupings. Afterwards, a descriptive discriminant analysis identified which variables best predict the different playing clusters. The results identified different playing profiles, particularly related to the game roles of scoring, passing, defensive, and all-round game behavior. Coaching staffs may apply this information to different players, while accounting for individual differences and functional variability, to optimize practice planning and, consequently, the game performances of individuals and teams.
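The clustering step can be sketched with a minimal k-means on per-minute performance variables. The two synthetic "roles" and the variables (points and assists per minute) below are invented for illustration, not NBA tracking data:

```python
# Hypothetical sketch: k-means grouping of per-minute performance profiles.
import numpy as np

rng = np.random.default_rng(3)
scorers = rng.normal([0.6, 0.1], 0.05, (50, 2))      # high pts, low ast per min
playmakers = rng.normal([0.2, 0.3], 0.05, (50, 2))   # low pts, high ast per min
X = np.vstack([scorers, playmakers])

def kmeans(X, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    centres = X[r.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        labels = ((X[:, None] - centres) ** 2).sum(-1).argmin(1)
        centres = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centres[j] for j in range(k)])
    return labels, centres

labels, centres = kmeans(X, k=2)
# With well-separated roles, each synthetic group lands in its own cluster.
print(centres)
```

In the study, a descriptive discriminant analysis is then run on the cluster labels to find which tracking variables best separate the profiles.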
Abstract:
In this work we show the results obtained by applying a Unified Dark Matter (UDM) model with a fast transition to a set of cosmological data. Two different functions to model the transition are tested, and the feasibility of both models is explored using CMB shift data from Planck [1], galaxy clustering data from [2] and [3], and Union2.1 SNe Ia [4]. These new models are also statistically compared with the ΛCDM and quiessence models using the Bayes factor computed from the Bayesian evidence. Bayesian inference does not discard the UDM models in favor of ΛCDM.
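Model comparison via the evidence can be shown in a hedged toy example: for a one-parameter model the evidence integral is just a grid sum, and the log Bayes factor is the difference of log evidences. The data, priors, and models below are invented stand-ins, not the cited cosmological datasets:

```python
# Hypothetical sketch: Bayes factor from grid-evaluated Bayesian evidence.
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(0.6, 1.0, 100)           # toy "observations", true mean 0.6

def log_like(mu):                          # Gaussian likelihood, sigma = 1
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * data.size * np.log(2 * np.pi)

# Model 0: mean fixed at 0 (no free parameters) -> evidence = likelihood.
log_ev0 = log_like(0.0)

# Model 1: mean free, flat prior on [-1, 1]; evidence by grid integration.
grid = np.linspace(-1.0, 1.0, 2001)
log_l = np.array([log_like(m) for m in grid])
dx = grid[1] - grid[0]
log_ev1 = np.log(np.sum(np.exp(log_l - log_l.max())) * dx / 2.0) + log_l.max()

log_bf = log_ev1 - log_ev0                 # log Bayes factor, M1 vs M0
print(round(log_bf, 2))                    # positive favours the free-mean model
```

The evidence automatically penalizes the extra parameter through the prior volume (an Occam factor), which is why Bayes factors can favour a simpler model even when the complex one fits slightly better.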
Abstract:
In this work we calibrate two different analytic models of semilocal strings by constraining the values of their free parameters. In order to do so, we use data obtained from the largest and most accurate field theory simulations of semilocal strings to date, and compare several key properties with the predictions of the models. As this is still work in progress, we present some preliminary results together with descriptions of the methodology we are using in the characterisation of semilocal string networks.
Abstract:
This study was an attempt to apply land-based GIS analysis to freshwater aquaculture planning in the Red River Delta of Vietnam. It drew on diverse data sources in order to support decision makers at the site level and to contribute to modelling the site-selection process for aquaculture development planning in the region.