32 results for Average Case Complexity
Abstract:
The understanding of sedimentary evolution is intimately related to knowledge of the exact ages of the sediments. When working on carbonate sediments, age dating is commonly based on paleontological observations and established biozonations, which may prove to be relatively imprecise. Dating by means of strontium isotope ratios in marine bioclasts is probably the best method for precisely dating carbonate successions, provided that the sample reflects original marine geochemical characteristics. This requires a careful study of the samples, including their petrography, SEM and cathodoluminescence observations, stable carbon and oxygen isotope geochemistry and, finally, the strontium isotope measurement itself. On the Nicoya Peninsula (northwestern Costa Rica), sediments from the Piedras Blancas Formation, Nambi Formation and Quebrada Pavas Formation were dated by means of strontium isotope ratios measured in Upper Cretaceous Inoceramus shell fragments. Results have shown average 87Sr/86Sr values of 0.707654 (middle late Campanian) for the Piedras Blancas Formation, 0.707322 (Turonian-Coniacian) for the Nambi Formation and 0.707721 (late Campanian-Maastrichtian) for the Quebrada Pavas Formation. Abundant detrital components in the studied formations pose a difficulty for strontium isotope dating: the fossil-bearing sediments can easily contaminate the target fossil with strontium mobilized from basalts during diagenesis, so that the obtained strontium isotope ratios, and hence the derived ages, may be significantly affected. The new and more precise age assignments allow for more precision in the chronostratigraphic chart of the sedimentary and tectonic evolution of the Nicoya Peninsula, providing better insight into the evolution of this region.

Meteor Cruise M81 dredged shallow-water carbonates from the Hess Rise and Hess Escarpment during March 2010. Several of these shallow-water carbonates contain abundant Larger Foraminifera that indicate an Eocene-Oligocene age. In this study, strontium isotope values ranging from 0.707847 to 0.708238 can be interpreted as indicating a Rupelian to Chattian age for these sediments. These platform sediments rest on seamounts now located at depths reaching 1600 m. Observation of the sedimentologic characteristics of these sediments has helped to resolve apparent discrepancies between fossil and strontium isotope ages. Hence, it is possible to show that subsidence was active during early Miocene times.

On La Désirade (Guadeloupe, France), the Neogene to Quaternary carbonate cover has been dated by microfossils and some U/Th ages. Disagreements persisted among the paleontological ages of the formations. Strontium isotope ratios ranging from 0.709047 to 0.709076 showed the Limestone Table of La Désirade to range from an Early Pliocene to a Late Pliocene/early Pleistocene age. A very late Miocene age (87Sr/86Sr = 0.709013) can be assigned to the Detrital Offshore Limestone. The flat volcanic basement had to be eroded by wave action during a long-term stable relative sea level. Sediments of the Table Limestone on La Désirade show both low-stand and high-stand facies that encroach on the igneous basement, implying deposition during a major phase of subsidence creating accommodation space. Subsidence is followed by tectonic uplift documented by fringing reefs and beach rocks that young from the top of the Table Limestone (180 m) towards the present coastline. Strontium isotope ratios from two different fringing reefs (0.707172 and 0.709145) and from a beach rock (0.709163) allow tentative dating (125 ky, ~400 ky, 945 ky) and indicate an uplift rate of about 5 cm/ky over this time period for La Désirade Island. The documented subsidence and uplift history calls for a new model of the tectonic evolution of the area.
Abstract:
In this paper, we study the average inter-crossing number between two random walks and between two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = (3 ln 2)/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear in n, but the prefactor of the linear term differs from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance p apart and p is small compared to n. We propose a fitting model intended to capture the theoretical asymptotic behaviour of the mean average ICN for large values of p. Our simulation results show that the model in fact works very well over the entire range of p. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
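As an illustration of the quantities discussed above, the following Python sketch (not the authors' code) generates equilateral random walks, i.e. unit-length steps in uniformly random directions, and estimates their inter-crossing number by discretizing the Gauss-type average-crossing integral at segment midpoints. The midpoint rule is only a rough approximation, and it degrades when segments of the two walks come very close, but for moderate n the sample mean can be compared against the predicted linear growth a·n with a = (3 ln 2)/8.

```python
import numpy as np

def equilateral_walk(n, start=(0.0, 0.0, 0.0), rng=None):
    """n unit-length steps in uniformly random directions, starting at `start`."""
    rng = np.random.default_rng() if rng is None else rng
    steps = rng.normal(size=(n, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)   # unit-length segments
    return np.asarray(start) + np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

def icn_midpoint_estimate(w1, w2):
    """Rough estimate of the average inter-crossing number between two polygonal
    curves: (1/(4*pi)) * sum_{i,j} |det(u_i, v_j, r_ij)| / |r_ij|^3, with the
    integrand evaluated once per pair of unit-length segments (midpoint rule).
    The estimate is noisy when segments of the two curves nearly touch."""
    u, a = np.diff(w1, axis=0), 0.5 * (w1[:-1] + w1[1:])    # tangents, midpoints
    v, b = np.diff(w2, axis=0), 0.5 * (w2[:-1] + w2[1:])
    r = a[:, None, :] - b[None, :, :]                       # midpoint-to-midpoint vectors
    num = np.abs(np.einsum('ijk,ijk->ij', np.cross(u[:, None, :], v[None, :, :]), r))
    return (num / np.linalg.norm(r, axis=2) ** 3).sum() / (4 * np.pi)

rng = np.random.default_rng(0)
n, p, trials = 50, 1.0, 200                                 # walk length, starting distance
estimates = [icn_midpoint_estimate(equilateral_walk(n, rng=rng),
                                   equilateral_walk(n, start=(p, 0.0, 0.0), rng=rng))
             for _ in range(trials)]
a = 3 * np.log(2) / 8
print(f"simulated mean ICN ~ {np.mean(estimates):.2f},  predicted slope a*n = {a * n:.2f}")
```

With this crude discretization the simulated means grow roughly linearly in n, but they should not be expected to reproduce the paper's prefactor exactly; the exact per-segment-pair integral would be needed for that.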
Abstract:
The public primary school system in the State of Geneva, Switzerland, is characterized by centrally evaluated pupil performance measured with standardized tests. As a result, consistent data are collected across the system. The 2010-2011 dataset is used to develop a two-stage data envelopment analysis (DEA) of school efficiency. In the first stage, DEA is employed to calculate an individual efficiency score for each school. It shows that, on average, each school could reduce its inputs by 7% whilst maintaining the same quality of pupil performance. The cause of inefficiency lies in perfectible management. In the second stage, efficiency is regressed on school characteristics and environmental variables, i.e. external factors outside the control of headteachers. The model is tested for multicollinearity, heteroskedasticity and endogeneity. Four variables are identified as statistically significant. School efficiency is negatively influenced by (1) the provision of special education, (2) the proportion of disadvantaged pupils enrolled at the school and (3) operations being held on multiple sites, but positively influenced by school size (captured by the number of pupils). The proportion of allophone pupils, location in an urban area and the provision of reception classes for immigrant pupils are not significant. Although the significant variables influencing school efficiency are outside the control of headteachers, it is still possible to either boost the positive impact or curb the negative impact.

In the canton of Geneva (Switzerland), public primary schools are characterized by funding provided by public authorities (canton and municipalities) and by the assessment of pupils with standardized tests at three distinct points in their schooling. This makes it possible to gather consistent statistical information. The 2010-2011 database is used in a two-stage analysis of school efficiency. In the first stage, data envelopment analysis (DEA) is used to calculate an efficiency score for each school. This analysis shows that average school efficiency amounts to 93%. Each school could, on average, reduce its resources by 7% while keeping pupils' results on the standardized tests constant. The source of the inefficiency lies in perfectible school management. In the second stage, the efficiency scores are regressed on school characteristics and on environmental variables. These variables are not under the control (or influence) of headteachers. The model is tested for multicollinearity, heteroskedasticity and endogeneity. Four variables are statistically significant. School efficiency is negatively influenced by (1) the provision of special education in separate classes, (2) the proportion of disadvantaged pupils and (3) operating on several different sites. School efficiency is positively influenced by school size, measured by the number of pupils. The proportion of allophone pupils, being located in an urban area and offering reception classes for immigrant pupils are all non-significant variables. The fact that the variables influencing school efficiency are not under headteachers' control does not mean that fatalism is warranted. Various avenues are proposed either to curb the negative impact or to take advantage of the positive impact of the significant variables.
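For readers unfamiliar with the first-stage computation, the sketch below shows one common DEA formulation (input-oriented, constant returns to scale) solved as a linear program with SciPy. The school inputs and outputs are purely hypothetical, and the study may well have used a different orientation or returns-to-scale assumption; this is only meant to illustrate how a per-school efficiency score of the kind reported above is obtained.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    Minimize theta subject to X.T @ lam <= theta * X[o],  Y.T @ lam >= Y[o],  lam >= 0,
    where X is (n_units, n_inputs) and Y is (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # decision variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]                         # X.T @ lam - theta * X[o] <= 0
    A_out = np.c_[np.zeros(s), -Y.T]                 # -(Y.T @ lam) <= -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                   # theta in (0, 1]; 1 = efficient

# Hypothetical school data: inputs = [teaching staff, budget], output = [mean test score].
X = np.array([[10.0, 1.2], [12.0, 1.0], [9.0, 1.5], [11.0, 1.1]])
Y = np.array([[75.0], [80.0], [70.0], [78.0]])
print([round(dea_input_efficiency(X, Y, o), 3) for o in range(len(X))])
```

A variable-returns-to-scale (BCC) variant would add the constraint that the lambda weights sum to one; in a two-stage design such as the one described above, these scores would then be regressed on the school characteristics and environmental variables.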
Abstract:
Purpose: To describe viral retinitis following intravitreal and periocular corticosteroid administration. Methods: Retrospective case series and comprehensive literature review. Results: We analyzed 5 unreported and 25 previously published cases of viral retinitis following local corticosteroid administration. Causes of retinitis included 23 CMV (76.7%), 5 HSV (16.7%), and 1 each of VZV and unspecified (3.3% each). Two of 22 tested patients (9.1%) were HIV positive. Twenty-one of 30 (70.0%) cases followed one or more intravitreal injections of triamcinolone acetonide (TA), 4 (13.3%) one or more posterior sub-Tenon injections of TA, 3 (10.0%) placement of a 0.59-mg fluocinolone acetonide implant (Retisert), and 1 (3.3%) each an anterior subconjunctival injection of TA (together with IVTA), an anterior chamber injection, and an anterior sub-Tenon injection. Mean time from most recent corticosteroid administration to development of retinitis was 4.2 months (median 3.8; range 0.25-13.0). Twelve patients (40.0%) had type II diabetes mellitus. Treatments used included systemic antiviral agents (26/30, 86.7%), intravitreal antiviral injections (20/30, 66.7%), and ganciclovir intravitreal implants (4/30, 13.3%). Conclusions: Viral retinitis may develop or reactivate following intraocular or periocular corticosteroid administration. Average time to development of retinitis was 4 months, and CMV was the most frequently observed agent. Diabetes was a frequent co-morbidity, and several patients with uveitis who developed retinitis were also receiving systemic immunosuppressive therapy.
Abstract:
Introduction: Patients who repeatedly attend the Emergency Department (ED) often have a distinct and complex vulnerability profile that includes poor somatic, psychological, and social indicators. This profile has an impact on the patients' well-being as well as on hospital costs. The objective of the study was to specify the characteristics of hyper users (HU) and explore the connection with ED care and hospital costs. Methods: The study sample comprised all adult patients with 12 or more attendances at the ED of Lausanne University Hospital in 2009. The data were collected by retrospectively searching internal databases to identify the patients concerned and then analysing their profiles. Information gathered included demographic, somatic, psychological, at-risk behaviour, and social indicators, as well as health system consumption including costs. Results: In 2009, 23 patients (0.1%) attended 12 times or more (425 attendances, 0.8%). The average age was about 43 years, 60.9% were female, and 47.8% were single. Of these, 95.7% had basic insurance, 87.0% had a general practitioner, and 30.4% were under legal guardianship. The majority attended in the evening or at night (67.1%), and almost one quarter of these attendances resulted in inpatient treatment (24.0%). Most HU had also attended the ED in previous years (95.7% in 2008). The most prevalent diagnoses concerned 'mental disorders' (87.0%). 30.4% of patients had attempted suicide (all of them women). Other frequent diagnoses concerned 'trauma' (65.2%) and the 'digestive' and 'nervous' systems (each 56.5%). At-risk behaviour such as severe alcohol consumption (34.8%) or excessive use of medicines (26.1%) was very frequent, and some patients used illicit drugs (21.7%). There was only a weak association between the number of ED attendances and the resulting costs. However, a reduction of one outpatient visit per patient would have decreased ED outpatient costs by 8.5%. Conclusions: HU often have a particularly vulnerable profile. Mental problems are prevalent among them, as are at-risk behaviour and severe somatic conditions. The complexity of these patients' profiles demands specific care that cannot be guaranteed within everyday ED routine. An interdisciplinary case management team might be a promising approach to diminishing the number of attendances and the associated costs, although the profiles of HU are such that they probably cannot completely give up ED attendance.
Abstract:
A 41-year-old male presented with severe frostbite that was monitored clinically and with a new laser Doppler imaging (LDI) camera that records arbitrary microcirculatory perfusion units (1-256 arbitrary perfusion units (APUs)). LDI monitoring detected perfusion differences in the hand and foot not seen visually. On days 4-5 after injury, LDI showed that while the fingers did not experience any significant perfusion change (average of 31±25 APUs on day 5), the patient's left big toe did (from 17±29 APUs on day 4 to 103±55 APUs on day 5). These changes in regional perfusion were not detectable by visual examination. On day 53 post-injury, all fingers with reduced perfusion by LDI were amputated, while the toe could be salvaged. This case clearly demonstrates that insufficient microcirculatory perfusion can be identified using LDI in ways that visual examination alone does not permit, allowing prognosis of clinical outcomes. Such information may also be used to develop improved treatment approaches.
Abstract:
Objectives: The relevance of the SYNTAX score for the particular case of patients with acute ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PPCI) has previously been studied only in post hoc analyses of large prospective randomized clinical trials. A "real-life" population approach has never been explored before. The aim of this study was to evaluate the SYNTAX score for predicting myocardial infarction size, estimated by the creatine kinase (CK) peak value, in patients treated with PPCI for acute STEMI. Methods: The primary endpoint of the study was myocardial infarction size as measured by the CK peak value. The SYNTAX score was calculated retrospectively in 253 consecutive patients with acute STEMI undergoing PPCI in a large tertiary referral center in Switzerland between January 2009 and June 2010. Linear regression analysis was performed to compare myocardial infarction size with the SYNTAX score. The endpoint was then stratified according to SYNTAX score tertiles: low <22 (n=178), intermediate [22-32] (n=60), and high >=33 (n=15). Results: There were no significant differences in clinical characteristics between the three groups. When stratified according to the SYNTAX score tertiles, average CK peak values of 1985 (low <22), 3336 (intermediate [22-32]) and 3684 (high >=33) were obtained (p-value <0.0001). Bartlett's test for equal variances between the three groups was 9.999 (p-value <0.0067). A moderate Pearson product-moment correlation coefficient (r=0.4074) with a high level of statistical significance (p-value <0.0001) was found. The coefficient of determination (R^2=0.1660) showed that approximately 17% of the variation in CK peak value (myocardial infarction size) could be explained by the SYNTAX score, i.e. by coronary disease complexity. Conclusion: In an all-comers population, the SYNTAX score is an additional tool for predicting myocardial infarction size in patients treated with PPCI. Stratifying patients into different risk groups according to the SYNTAX score makes it possible to identify a high-risk population that may warrant particular patient care.
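As a hedged illustration of the statistics reported above (not the study's own code or data), the following Python sketch computes a Pearson correlation, the derived R^2, and Bartlett's test for equal variances across three SYNTAX strata on synthetic data; the cut-offs mirror those in the abstract.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in data: SYNTAX scores and CK peak values (arbitrary units).
rng = np.random.default_rng(1)
syntax = rng.integers(5, 41, size=253).astype(float)
ck_peak = 90.0 * syntax + rng.normal(0.0, 1500.0, size=253)

# Correlation between coronary complexity and infarct size, and explained variance.
r, p = stats.pearsonr(syntax, ck_peak)
print(f"r = {r:.4f}, R^2 = {r**2:.4f}, p = {p:.3g}")

# Stratify by the abstract's cut-offs and test equality of variances across groups.
low = ck_peak[syntax < 22]
intermediate = ck_peak[(syntax >= 22) & (syntax <= 32)]
high = ck_peak[syntax >= 33]
bartlett_stat, bartlett_p = stats.bartlett(low, intermediate, high)
print(f"group means: {low.mean():.0f}, {intermediate.mean():.0f}, {high.mean():.0f}")
print(f"Bartlett statistic = {bartlett_stat:.3f}, p = {bartlett_p:.4f}")
```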
Abstract:
In recent years there has been explosive growth in the development of adaptive and data-driven methods. One efficient data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important for both monitoring and forecasting. SVM perform extremely well when the input space is high-dimensional and the training data set is not large enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens the way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
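A minimal scikit-learn sketch of the idea described above, on synthetic data standing in for spatially distributed measurements (coordinates and covariates as inputs, a binary class as the target); the kernel, C and gamma choices are illustrative rather than taken from any cited study. It also shows that only a subset of the training points, the support vectors, determines the decision boundary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for monitoring data: features as inputs, a binary class
# (e.g. a measured value exceeding a threshold) as the target.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; C and gamma control the trade-off between fitting the training
# data and keeping the model simple (the SRM idea of bounding model complexity).
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
# Only the support vectors determine the decision boundary.
print("support vectors used:", clf.n_support_.sum(), "of", len(X_train), "training points")
```

In practice, C and the kernel parameters would be tuned by cross-validation, which is how the SRM trade-off between data fit and model complexity is controlled empirically.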
Abstract:
Student screening by schools is a regulatory mechanism for school inclusion and exclusion that typically overrides parental expectations of school choice. Based on the "Parents survey 2006" data (n=188,073) generated by the Chilean Educational Ministry, this paper describes parents' reasons for choosing their children's school and schools' criteria for screening students. It concludes that Catholic schools are the most selective institutions and usually exceed the scope of parental choice. One of the reasons for selecting students would be the direct relationship between this practice and an increase in the average score on the test of the Chilean Educational Quality Measurement System (SIMCE).
Abstract:
Games are powerful and engaging. On average, one billion people spend at least 1 hour a day playing computer and video games. This is even more true of the younger generations. Our students have become the "digital natives", the "gamers", the "virtual generation". Research shows that those who are most at risk of failure in the traditional classroom setting also spend more time than their counterparts playing video games. They might thrive, given a different learning environment. Educators have a responsibility to align their teaching style with these younger generations' learning styles. However, many academics resist the use of computer-assisted learning that has been "created elsewhere". This can be extrapolated to game-based teaching: even if educational games were more widely authored, their adoption would still be limited to the educators who feel a match between the authored games and their own beliefs and practices. Consequently, game-based teaching would be much more widespread if teachers could develop their own games, or at least customize them. Yet the development and customization of teaching games are complex and costly. This research uses a design science methodology, leveraging gamification techniques, active and cooperative learning theories, as well as immersive sandbox 3D virtual worlds, to develop a method that allows management instructors to transform any off-the-shelf case study into an engaging, collaborative, gamified experience. This method is applied to marketing case studies and uses the sandbox virtual world of Second Life. -- Games are powerful and motivating. On average, one billion people spend at least 1 hour a day playing video games on a computer. This is even more true of the younger generations. Our students were born in the digital era; some call them "gamers", others the "virtual generation". Studies show that pupils who are failing in traditional classrooms also spend more time than their peers playing video games. They could potentially shine if offered a different learning environment. Teachers have a responsibility to adapt their teaching style to the learning styles of these younger generations. However, many instructors resist using computer-assisted learning content developed by others. This can be extrapolated to teaching through games: even if a larger number of educational games were created, their adoption would still be limited to educators who perceive a good fit between these games and their own beliefs and practices. Consequently, teaching through games would be much more widespread if teachers could develop their own games, or at least customize them. But developing educational games is complex and costly. This research uses a Design Science methodology, drawing on gamification techniques, active and cooperative learning theories, and immersive 3D "sandbox" virtual worlds, to develop a method that allows management teachers and trainers to transform any case study, for example one from a case clearinghouse, into a playful, collaborative and motivating experience. This method is applied to marketing case studies in the virtual world of Second Life.
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees to the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether the performance levels were affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria; on the other hand, the average consensus, split fit, and most similar supertree methods showed poorer performance or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
Abstract:
OBJECTIVES: The aim of this study was to describe the demographic, social and medical characteristics and healthcare use of highly frequent users of a university hospital emergency department (ED) in Switzerland. METHODS: A retrospective consecutive case series was performed. We included all highly frequent users, defined as patients attending the ED 12 times or more within a calendar year (1 January 2009 to 31 December 2009). We collected their characteristics and calculated a score of accumulated vulnerability risk factors. RESULTS: Highly frequent users comprised 0.1% of ED patients and accounted for 0.8% of all ED attendances (23 patients, 425 attendances). Of all highly frequent users, 87% had a primary care practitioner, 82.6% were unemployed, 73.9% were socially isolated, and 60.9% had a mental health or substance use primary diagnosis. One-third had attempted suicide during the study period, all of them women. They were often admitted (24.0% of attendances), and only 8.7% were uninsured. On average, they accumulated 3.3 different vulnerability risk factors (SD 1.4). CONCLUSION: Highly frequent users of a Swiss academic ED are a highly vulnerable population. They are in poor health and accumulate several risk factors for even poorer health. The small number of patients and their high level of insurance coverage make it particularly feasible to design a specific intervention to address their needs, in close collaboration with their primary care practitioners. Development of the intervention should focus on social reintegration and on risk-reduction strategies with regard to substance use, hospital admissions and suicide.
Abstract:
INTRODUCTION: Prospective epidemiologic studies have consistently shown that levels of circulating androgens in postmenopausal women are positively associated with breast cancer risk. However, data in premenopausal women are limited. METHODS: A case-control study nested within the New York University Women's Health Study was conducted. A total of 356 cases (276 invasive and 80 in situ) and 683 individually matched controls were included. Matching variables included age and the date, phase, and day of menstrual cycle at blood donation. Testosterone, androstenedione, dehydroepiandrosterone sulfate (DHEAS) and sex hormone-binding globulin (SHBG) were measured using direct immunoassays. Free testosterone was calculated. RESULTS: Premenopausal serum testosterone and free testosterone concentrations were positively associated with breast cancer risk. In models adjusted for known breast cancer risk factors, the odds ratios for increasing quintiles of testosterone were 1.0 (reference), 1.5 (95% confidence interval (CI), 0.9 to 2.3), 1.2 (95% CI, 0.7 to 1.9), 1.4 (95% CI, 0.9 to 2.3) and 1.8 (95% CI, 1.1 to 2.9; Ptrend = 0.04), and for free testosterone were 1.0 (reference), 1.2 (95% CI, 0.7 to 1.8), 1.5 (95% CI, 0.9 to 2.3), 1.5 (95% CI, 0.9 to 2.3), and 1.8 (95% CI, 1.1 to 2.8; Ptrend = 0.01). A marginally significant positive association was observed with androstenedione (P = 0.07), but no association was found with DHEAS or SHBG. Results were consistent in analyses stratified by tumor type (invasive, in situ), estrogen receptor status, age at blood donation, and menopausal status at diagnosis. Intra-class correlation coefficients for samples collected 0.8 to 5.3 years apart (median 2 years) in 138 cases and 268 controls were greater than 0.7 for all biomarkers except androstenedione (0.57 in controls). CONCLUSIONS: Premenopausal concentrations of testosterone and free testosterone are associated with breast cancer risk. Testosterone and free testosterone measurements are also highly reliable (that is, a single measurement is reflective of a woman's average level over time). Results from other prospective studies are consistent with ours. The impact of including testosterone or free testosterone in breast cancer risk prediction models for women between the ages of 40 and 50 years should be assessed. Improving risk prediction models for this age group could help decision making regarding both screening and chemoprevention of breast cancer.
Abstract:
Place branding is not a new phenomenon. The emphasis placed on place branding has recently become particularly strong and explicit to both practitioners and scholars, in the current context of growing mobility of capital and people. On the one hand, there is a need for practitioners to better understand place brands and better implement place branding strategies. In this respect, the domain of study can currently be seen as 'practitioner led', and many contributions assess specific cases in order to identify success factors and best practices for place branding. On the other hand, at a more analytical level, recent studies show the complexity of the concept of place branding and argue that place branding works as a process involving various stakeholders, in which culture and identity play a crucial role. In the literature, tourists, companies and residents represent the main target groups of place branding. The issues regarding tourists and companies have long been examined by place promoters, location branders, economists and other scholars. However, the analysis of residents' role in place branding was overlooked until recently and represents a new interest for researchers. The present research aims to further develop the concept of place branding, both theoretically and empirically. First, the paper presents a theoretical overview of place branding, from general basic questions (definitions of place, brand and place brand) to specific current debates in the literature. Subsequently, the empirical part consists of a case study of the Grand Genève (Great Geneva).