765 results for sample complexity
Abstract:
Little is known about the opinions, beliefs and behavior of Swiss physicians regarding physical activity (PA) promotion in a primary care setting. A qualitative study was performed with semi-structured interviews. We purposively recruited and interviewed 16 physicians in the French-speaking part of Switzerland. Their statements and ideas regarding the promotion of PA in a primary care setting were transcribed and synthesized from the tape-recorded interviews.
Inversion effect of "old" vs "new" faces, face-like objects, and objects in a healthy student sample
Abstract:
Neuroblastoma (NB) is a neural crest-derived childhood tumor characterized by remarkable phenotypic diversity, ranging from spontaneous regression to fatal metastatic disease. Although the cancer stem cell (CSC) model provides a trail to characterize the cells responsible for tumor onset, the NB tumor-initiating cell (TIC) has not been identified. In this study, the relevance of the CSC model in NB was investigated by taking advantage of typical functional stem cell characteristics. A predictive association was established between self-renewal, as assessed by serial sphere formation, and clinical aggressiveness in primary tumors. Moreover, cell subsets gradually selected during serial sphere culture harbored increased in vivo tumorigenicity, which was revealed only in an orthotopic microenvironment. A microarray time-course analysis of serial sphere passages from metastatic cells allowed us to specifically "profile" the NB stem cell-like phenotype and to identify CD133, ABC transporter, and WNT and NOTCH genes as sphere markers. On the basis of combined sphere marker expression, at least two distinct tumorigenic cell subpopulations were identified and were also shown to preexist in primary NB. However, sphere marker-mediated cell sorting of the parental tumor failed to recapitulate the TIC phenotype in the orthotopic model, highlighting the complexity of the CSC model. Our data support NB stem-like cells as a dynamic and heterogeneous cell population strongly dependent on microenvironmental signals and add novel candidate genes as potential therapeutic targets in the control of high-risk NB.
Abstract:
The objective of this study is to examine the factor structure and internal consistency of the TAS-20 in a sample of adolescents (n = 264), and to describe the distribution of alexithymic characteristics in this sample. The three-factor structure of the TAS-20 was confirmed by our confirmatory factor analysis. Internal consistency, measured with Cronbach's alpha, was acceptable for the first factor (difficulty identifying feelings, DIF), good for the second (difficulty describing feelings, DDF), but weak for the third factor (externally oriented thinking, EOT). The results of an ANOVA reveal a linear trend: as age increases, the overall level of alexithymia (total TAS-20 score), difficulty identifying feelings, and externally oriented thinking all decrease. Regarding the prevalence of alexithymia, 38.5% of adolescents under 16 were classified as alexithymic, compared with 30.1% of 16-17-year-olds and 22% of those over 17. Our study therefore indicates that the TAS-20 is an adequate instrument for assessing alexithymia in adolescence, while suggesting some caution given the developmental nature of this period.
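As a rough illustration of the internal-consistency measure used above, the following Python sketch computes Cronbach's alpha for a simulated item matrix. The respondent count mirrors the abstract, but the items, rating scale, and scores are invented for the example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 264 respondents, 7 items of a DIF-like subscale on a 1-5 scale
rng = np.random.default_rng(0)
latent = rng.normal(size=(264, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=1.0, size=(264, 7))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```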
Abstract:
To assess the associations between alcohol consumption and cytokine levels (interleukin-1beta, IL-1β; interleukin-6, IL-6; and tumor necrosis factor-α, TNF-α) in a Caucasian population. Population sample of 2884 men and 3201 women aged 35-75. Alcohol consumption was categorized as nondrinkers, low (1-6 drinks/week), moderate (7-13/week) and high (14+/week). No difference in IL-1β levels was found between alcohol consumption categories. Low and moderate alcohol consumption were associated with lower IL-6 levels: median (interquartile range) 1.47 (0.70-3.51), 1.41 (0.70-3.32), 1.42 (0.66-3.19) and 1.70 (0.83-4.39) pg/ml for nondrinkers, low, moderate and high drinkers, respectively, p<0.01, but this association was no longer significant after multivariate adjustment. Compared to nondrinkers, moderate drinkers had the lowest odds (odds ratio = 0.86 (0.71-1.03)) of being in the highest quartile of IL-6, with a significant (p<0.05) quadratic trend. Low and moderate alcohol consumption were also associated with lower TNF-α levels: 2.92 (1.79-4.63), 2.83 (1.84-4.48), 2.82 (1.76-4.34) and 3.15 (1.91-4.73) pg/ml for nondrinkers, low, moderate and high drinkers, respectively, p<0.02, and this difference remained borderline significant (p=0.06) after multivariate adjustment. Moderate drinkers also had lower odds (0.81 (0.68-0.98)) of being in the highest quartile of TNF-α. No effect specific to any alcoholic beverage (wine, beer or spirits) was found. Moderate alcohol consumption is associated with lower levels of IL-6 and, to a lesser degree, of TNF-α, irrespective of the type of alcohol consumed. No association was found between IL-1β levels and alcohol consumption.
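A minimal sketch of the kind of analysis described, assuming simulated data rather than the study's cohort: drinks per week are binned into the four consumption categories and a logistic regression estimates the odds of falling in the highest IL-6 quartile, with non-drinkers as the reference. Variable names and the data-generating step are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 6085  # roughly 2884 men + 3201 women; these data are simulated
df = pd.DataFrame({
    "drinks_per_week": rng.poisson(5, n),
    "il6": rng.lognormal(mean=0.5, sigma=0.8, size=n),
})

# Consumption categories used in the abstract: none, 1-6, 7-13, 14+ drinks/week
bins = [-1, 0, 6, 13, np.inf]
labels = ["non", "low", "moderate", "high"]
df["category"] = pd.cut(df["drinks_per_week"], bins=bins, labels=labels)

# Outcome: being in the highest IL-6 quartile
df["top_quartile"] = (df["il6"] > df["il6"].quantile(0.75)).astype(int)

# Logistic regression with non-drinkers as the reference category
X = sm.add_constant(pd.get_dummies(df["category"], drop_first=True).astype(float))
fit = sm.Logit(df["top_quartile"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios relative to non-drinkers
```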
Abstract:
I develop a model of endogenous bounded rationality due to search costs arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from revealed preferences through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model therefore explains why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, with dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
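A toy numerical illustration of the idea, not the paper's formal model: an agent can reveal a payoff tree one level at a time at a per-level cost and chooses the look-ahead depth that maximizes perceived payoff net of search cost. When the cost is small relative to the stakes, the choice coincides with the fully rational one; when it is large, search stops early and the agent settles for a default.

```python
def best_payoff(tree, depth):
    """Best terminal payoff visible when only `depth` levels are revealed."""
    if not isinstance(tree, tuple):
        return tree
    if depth == 0:
        return 0  # unrevealed subtrees are valued at a crude default
    return max(best_payoff(branch, depth - 1) for branch in tree)

def chosen_depth(tree, max_depth, cost_per_level):
    """Depth that maximizes perceived payoff minus total search cost."""
    return max(range(max_depth + 1),
               key=lambda d: best_payoff(tree, d) - cost_per_level * d)

tree = (2, (0, 10))  # a shallow payoff of 2, a deeper payoff of 10
for c in (0.5, 6.0):
    d = chosen_depth(tree, max_depth=2, cost_per_level=c)
    print(f"cost={c}: search depth {d}, perceived best payoff {best_payoff(tree, d)}")
```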
Abstract:
We consider, both theoretically and empirically, how different organization modes are aligned to govern the efficient solving of technological problems. The data set is a sample from the Chinese consumer electronics industry. Following mainly the problem-solving perspective (PSP) within the knowledge-based view (KBV), we develop and test several PSP and KBV hypotheses, in conjunction with competing transaction cost economics (TCE) alternatives, in an examination of the determinants of the R&D organization mode. The results show that a firm's existing knowledge base is the single most important explanatory variable. Problem complexity and decomposability are also found to be important, consistent with the theoretical predictions of the PSP, but it is suggested that these two dimensions need to be treated as separate variables. The TCE hypotheses also receive some support, but the estimation results appear more supportive of the PSP and the KBV than of the TCE.
Abstract:
This paper critically examines a number of issues relating to the measurement of tax complexity. It starts with an analysis of the concept of tax complexity, distinguishing tax design complexity from operational complexity. It considers the consequences and costs of complexity, and then examines the rationale for measuring complexity. Finally, it applies the analysis to an examination of an index of complexity developed by the UK Office of Tax Simplification (OTS).
Abstract:
OBJECTIVES: To document biopsychosocial profiles of patients with rheumatoid arthritis (RA) by means of the INTERMED and to correlate the results with conventional methods of disease assessment and health care utilization. METHODS: Patients with RA (n = 75) were evaluated with the INTERMED, an instrument for assessing case complexity and care needs. Based on their INTERMED scores, patients were compared with regard to severity of illness, functional status, and health care utilization. RESULTS: In cluster analysis, a 2-cluster solution emerged, with about half of the patients characterized as complex. Complex patients scoring especially high in the psychosocial domain of the INTERMED were disabled significantly more often and took more psychotropic drugs. Although the 2 patient groups did not differ in severity of illness and functional status, complex patients rated their illness as more severe on subjective measures and on most items of the Medical Outcomes Study Short Form 36. Complex patients showed increased health care utilization despite a similar biologic profile. CONCLUSIONS: The INTERMED identified complex patients with increased health care utilization, provided meaningful and comprehensive patient information, and proved to be easy to implement and advantageous compared with conventional methods of disease assessment. Intervention studies will have to demonstrate whether management strategies based on INTERMED profiles can improve treatment response and outcome of complex patients.
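A minimal sketch of a two-cluster analysis in the spirit of the one described, assuming hypothetical INTERMED-style domain scores and k-means as the clustering method (the abstract does not specify the algorithm used).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical INTERMED-style domain scores (biological, psychological,
# social, health care) for 75 patients; the real scoring is not reproduced here.
rng = np.random.default_rng(42)
scores = np.vstack([
    rng.normal(loc=[4, 3, 3, 4], scale=1.0, size=(38, 4)),  # "non-complex"-like profiles
    rng.normal(loc=[5, 7, 6, 7], scale=1.0, size=(37, 4)),  # "complex"-like profiles
]).clip(0, 12)

# Two-cluster solution, analogous to the complex / non-complex split
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
for label in (0, 1):
    members = scores[km.labels_ == label]
    print(f"cluster {label}: n={len(members)}, mean domain scores={members.mean(axis=0).round(1)}")
```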
Abstract:
Introduction: As part of the MicroArray Quality Control (MAQC)-II project, this analysis examines how the choice of univariate feature-selection methods and classification algorithms may influence the performance of genomic predictors under varying degrees of prediction difficulty represented by three clinically relevant endpoints. Methods: We used gene-expression data from 230 breast cancers (grouped into training and independent validation sets), and we examined 40 predictors (five univariate feature-selection methods combined with eight different classifiers) for each of the three endpoints. Their classification performance was estimated on the training set by using two different resampling methods and compared with the accuracy observed in the independent validation set. Results: A ranking of the three classification problems was obtained, and the performance of 120 models was estimated and assessed on an independent validation set. The bootstrapping estimates were closer to the validation performance than were the cross-validation estimates. The required sample size for each endpoint was estimated, and both gene-level and pathway-level analyses were performed on the obtained models. Conclusions: We showed that genomic predictor accuracy is determined largely by an interplay between sample size and classification difficulty. Variations on univariate feature-selection methods and choice of classification algorithm have only a modest impact on predictor performance, and several statistically equally good predictors can be developed for any given classification problem.
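A simplified sketch of the comparison described, assuming synthetic data in place of the breast-cancer expression sets: one predictor (a univariate filter plus a simple classifier) is scored by cross-validation and by a crude bootstrap on the training split, and both estimates are compared with accuracy on a held-out validation split.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for gene-expression data; the MAQC-II data are not used here.
X, y = make_classification(n_samples=230, n_features=2000, n_informative=30, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.35, random_state=0)

# One of many possible predictors: univariate feature selection + a simple classifier.
model = make_pipeline(SelectKBest(f_classif, k=50), LogisticRegression(max_iter=1000))

# Cross-validation estimate on the training set
cv_acc = cross_val_score(model, X_train, y_train, cv=5).mean()

# Simple bootstrap estimate: refit on resamples, score on the out-of-bag samples
rng = np.random.default_rng(0)
boot_scores = []
for _ in range(20):
    idx = rng.integers(0, len(y_train), len(y_train))
    oob = np.setdiff1d(np.arange(len(y_train)), idx)
    boot_scores.append(model.fit(X_train[idx], y_train[idx]).score(X_train[oob], y_train[oob]))
boot_acc = float(np.mean(boot_scores))

# External validation accuracy, the benchmark both estimates try to anticipate
val_acc = model.fit(X_train, y_train).score(X_val, y_val)
print(f"CV={cv_acc:.3f}  bootstrap={boot_acc:.3f}  validation={val_acc:.3f}")
```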
Abstract:
This paper develops a methodology for estimating entire population distributions from bin-aggregated sample data. We do this through the estimation of the parameters of mixtures of distributions that allow for maximal parametric flexibility. The statistical approach we develop enables comparisons of the full distributions of height data from potential army conscripts across France's 88 departments for most of the nineteenth century. These comparisons are made by testing for differences of means and stochastic dominance. Corrections for possible measurement errors are also devised by taking advantage of the richness of the data sets. Our methodology is of interest to researchers working on historical as well as contemporary bin-aggregated or histogram-type data, which remain widely used since much of the publicly available information comes in that form, often because of restrictions arising from political sensitivity and/or confidentiality concerns.
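A stripped-down sketch of fitting a distribution to bin-aggregated data by maximizing the multinomial likelihood of the bin counts. It assumes a two-component normal mixture and invented bin counts, whereas the paper works with more flexible mixtures and the historical conscript data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical bin-aggregated height data (cm): bin edges and observed counts.
edges = np.array([150, 155, 160, 165, 170, 175, 180, 190], dtype=float)
counts = np.array([40, 180, 520, 900, 760, 310, 90], dtype=float)

def neg_log_likelihood(params):
    """Multinomial log-likelihood of bin counts under a two-component normal mixture."""
    w = 1 / (1 + np.exp(-params[0]))               # mixing weight constrained to (0, 1)
    mu1, mu2 = params[1], params[2]
    s1, s2 = np.exp(params[3]), np.exp(params[4])  # positive standard deviations
    cdf = w * norm.cdf(edges, mu1, s1) + (1 - w) * norm.cdf(edges, mu2, s2)
    bin_probs = np.clip(np.diff(cdf), 1e-12, None)
    bin_probs /= bin_probs.sum()                   # condition on falling inside the bins
    return -np.sum(counts * np.log(bin_probs))

start = np.array([0.0, 162.0, 172.0, np.log(4.0), np.log(4.0)])
fit = minimize(neg_log_likelihood, start, method="Nelder-Mead")
w = 1 / (1 + np.exp(-fit.x[0]))
print(f"weight={w:.2f}, means=({fit.x[1]:.1f}, {fit.x[2]:.1f}) cm, "
      f"sds=({np.exp(fit.x[3]):.1f}, {np.exp(fit.x[4]):.1f}) cm")
```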
Abstract:
The properties of GMM estimators for panel data, which have become very popular in the empirical economic growth literature, are not well known when the number of individuals is small. This paper analyses, through Monte Carlo simulations, the properties of various GMM and other estimators when the number of individuals is that typically available in country growth studies. It is found that, provided some persistence is present in the series, the system GMM estimator has lower bias and higher efficiency than all the other estimators analysed, including the standard first-differences GMM estimator.
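A small Monte Carlo sketch in the same spirit, with invented parameters: it simulates an AR(1) panel with fixed effects and few individuals, and compares the within (LSDV) estimator with a simple Anderson-Hsiao IV estimator as a basic stand-in for the first-differences GMM estimator. The paper's system GMM comparison is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho, reps = 30, 8, 0.8, 500  # few "countries", short panel, persistent series
lsdv, ah = [], []

for _ in range(reps):
    eta = rng.normal(size=N)                                  # individual fixed effects
    y = np.zeros((N, T + 1))
    y[:, 0] = eta / (1 - rho) + rng.normal(size=N)            # start near the stationary level
    for t in range(1, T + 1):
        y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)

    # Within (LSDV) estimator: demean by individual, regress y_t on y_{t-1}
    y_dep, y_lag = y[:, 1:], y[:, :-1]
    yd = y_dep - y_dep.mean(axis=1, keepdims=True)
    xl = y_lag - y_lag.mean(axis=1, keepdims=True)
    lsdv.append((xl * yd).sum() / (xl ** 2).sum())

    # Anderson-Hsiao: instrument the lagged difference with y_{t-2} in the differenced equation
    dy = np.diff(y, axis=1)                                   # columns are delta_y_1 ... delta_y_T
    dep, lag, instr = dy[:, 1:], dy[:, :-1], y[:, :-2]
    ah.append((instr * dep).sum() / (instr * lag).sum())

print(f"true rho = {rho}")
print(f"LSDV mean estimate:      {np.mean(lsdv):.3f}  (Nickell bias with small T)")
print(f"Anderson-Hsiao IV mean:  {np.mean(ah):.3f}")
```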