37 results for Discipline and Punish
Abstract:
This study examined the validity and reliability of a sequential "Run-Bike-Run" test (RBR) in age-group triathletes. Eight Olympic distance (OD) specialists (age 30.0 ± 2.0 years, mass 75.6 ± 1.6 kg, run VO2max 63.8 ± 1.9 ml·kg⁻¹·min⁻¹, cycle VO2peak 56.7 ± 5.1 ml·kg⁻¹·min⁻¹) performed four trials over 10 days. Trial 1 (TRVO2max) was an incremental treadmill running test. Trials 2 and 3 (RBR1 and RBR2) involved: 1) a 7-min run at 15 km·h⁻¹ (R1) plus a 1-min transition to 2) cycling to fatigue (2 W·kg⁻¹ body mass, then 30 W more each 3 min); 3) 10-min cycling at 3 W·kg⁻¹ (Bsubmax); another 1-min transition; and 4) a second 7-min run at 15 km·h⁻¹ (R2). Trial 4 (TT) was a 30-min cycle, 20-min run time trial. No significant differences in absolute oxygen uptake (VO2), heart rate (HR), or blood lactate concentration ([BLA]) were evident between RBR1 and RBR2. For all measured physiological variables, the limits of agreement were similar, and the mean differences physiologically unimportant, between trials. Low levels of test-retest error (i.e. ICC >0.8, CV <10%) were observed for most (logged) measurements. However, [BLA] post R1 (ICC 0.87, CV 25.1%), [BLA] post Bsubmax (ICC 0.99, CV 16.3%) and [BLA] post R2 (ICC 0.51, CV 22.9%) were least reliable. These error ranges may help coaches detect real changes in training status over time. Moreover, RBR test variables can be used to predict discipline-specific and overall TT performance. Cycle VO2peak, cycle peak power output, and the change between R1 and R2 (ΔR1R2) in [BLA] were most highly related to overall TT distance (r = 0.89, p < 0.01; r = 0.94, p < 0.02; r = 0.86, p < 0.05, respectively). The percentage of TRVO2max at 15 km·h⁻¹, and ΔR1R2 HR, were also related to run TT distance (r = -0.83 and 0.86, both p < 0.05).
Abstract:
The development of forensic intelligence relies on the expression of suitable models that better represent the contribution of forensic intelligence in relation to the criminal justice system, policing and security. Such models assist in comparing and evaluating methods and new technologies, provide transparency and foster the development of new applications. Interestingly, strong similarities between two separate projects focusing on specific forensic science areas were recently observed. These observations have led to the induction of a general model (Part I) that could guide the use of any forensic science case data in an intelligence perspective. The present article builds upon this general approach by focusing on decisional and organisational issues. The article investigates the comparison process and evaluation system that lie at the heart of the forensic intelligence framework, advocating scientific decision criteria and a structured but flexible and dynamic architecture. These building blocks are crucial and clearly lie within the expertise of forensic scientists. However, they are only part of the problem. Forensic intelligence includes other blocks with their respective interactions, decision points and tensions (e.g. regarding how to guide detection and how to integrate forensic information with other information). Formalising these blocks identifies many questions and potential answers. Addressing these questions is essential for the progress of the discipline. Such a process requires clarifying the role and place of the forensic scientist within the whole process and their relationship to other stakeholders.
Abstract:
The primary mission of forensic toxicology is the search for substances that may have played a direct or indirect role in the cause of death, or that may have modified an individual's behaviour. The rising consumption of illegal substances and medicines in modern societies has driven considerable growth in forensic toxicology over recent decades. In parallel, advances in analytical technology have provided highly sensitive and specific tools for the screening and quantification of a multitude of substances in a wide variety of biological specimens, even at the very low concentrations that result from a single dose of medication.
Abstract:
Proteomics has come a long way from the initial qualitative analysis of proteins present in a given sample at a given time ("cataloguing") to large-scale characterization of proteomes, their interactions and dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and most recent instrumentation news are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another special chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.
Abstract:
Pharmacogenetics, the study of how individual genetic profiles influence the response to drugs, is an important topic. Results from pharmacogenetics studies in various clinical settings may lead to personalized medicine. Herein, we present the most important concepts of this discipline, as well as currently used study methods.
Abstract:
Continuing education is clearly an integral part of a physician's life: it is not only an ethical duty towards patients but also the expression of the need to stay up to date in daily practice, a consequence of rapid progress in medicine, particularly in medical oncology. It can also be a source of pleasure when it comes to expanding one's knowledge. Its minimal rules were defined several years ago by the FMH, which delegates their practical application to the specialty societies. In 2008, a revision, necessary for various reasons, simplified the calculation of credits. Although the total number of training hours remained the same (50 credits), it was split in two: 25 credits for discipline-specific training and 25 that may be earned in another discipline (March 2009 revision of the Continuing Education Regulations, art. 5a). This revision did not please all the specialty societies, which retain the right to raise the minimum they judge necessary for their discipline. The sheer quantity of continuing education on offer to physicians is frankly plethoric (national and international congresses, e-learning, local symposiums, etc.); the same cannot be said of its quality. In medical oncology, offerings abound in a context of obvious marketing: pharmaceutical companies sponsor meetings with a hired speaker, prestigious if possible, invited to praise a specific product in a cycle of presentations at various venues across French-speaking Switzerland (each time with the possibility of crediting the participants' training accounts)... 
They also provide logistical support for mini-conferences organized by the various local institutions, in which physicians participate only sporadically given their often very secondary interest; it is not rare for the medical audience to amount to five or ten participants. In the end, these scattered offerings of debatable quality monopolize resources that are rapidly becoming scarce in the current economic climate and that must imperatively be used more judiciously, notably by avoiding repetitive events. Faced with all these offerings, it is often difficult for the specialty society to separate the wheat from the chaff and, consequently, to award training credits objectively. On this basis, a small group of French-speaking Swiss oncologists, both established practitioners and members of the university centres, considered bringing the organization of continuing education that meets both the needs and the quality requirement together within a single structure for French-speaking Switzerland. Its tasks are manifold: to organize several half-days of training each year, and, with a scientific committee, to give preliminary opinions on the quality of the continuing education delivered within its territory of competence (without encroaching on the prerogatives of the postgraduate training commission of the Swiss Society of Medical Oncology, SSOM), bringing together the university centres, the cantonal and regional hospitals, and practitioners. Thus the association FoROMe (Formation romande en oncologie médicale) was born. Its legitimacy has been established by the SSOM and by the FMH's Committee for Postgraduate and Continuing Education (now the SIWF). It is now in a position to carry out the tasks for which it was constituted. 
Clearly this will not happen without resistance: some will say that they see no need for an additional structure, that the specialty societies do their work very well, and that this is yet another infringement of freedom. Sooner or later, however, economic necessity will come to the aid of logic and confirm the changes this initiative has anticipated. In future, the task will be to demonstrate the soundness of this initiative and to remain vigilant that the structure functions well, to the satisfaction of our members.
Abstract:
Understanding how communities of living organisms assemble has been a central question in ecology since the early days of the discipline. Disentangling the different processes involved in community assembly is not only interesting in itself but also crucial for an understanding of how communities will behave under future environmental scenarios. The traditional concept of assembly rules reflects the notion that species do not co-occur randomly but are restricted in their co-occurrence by interspecific competition. This concept can be redefined in a more general framework where the co-occurrence of species is a product of chance, historical patterns of speciation and migration, dispersal, abiotic environmental factors, and biotic interactions, with none of these processes being mutually exclusive. Here we present a survey and meta-analyses of 59 papers that compare observed patterns in plant communities with null models simulating random patterns of species assembly. According to the type of data under study and the different methods that are applied to detect community assembly, we distinguish four main types of approach in the published literature: species co-occurrence, niche limitation, guild proportionality and limiting similarity. Results from our meta-analyses suggest that non-random co-occurrence of plant species is not a widespread phenomenon. However, whether this finding reflects the individualistic nature of plant communities or is caused by methodological shortcomings associated with the studies considered cannot be discerned from the available metadata. We advocate that more thorough surveys be conducted using a set of standardized methods to test for the existence of assembly rules in data sets spanning larger biological and geographical scales than have been considered until now. We underpin this general advice with guidelines that should be considered in future assembly rules research. 
This will enable us to draw more accurate and general conclusions about the non-random aspect of assembly in plant communities.
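The null-model logic described in this abstract (comparing an observed co-occurrence statistic against randomized community matrices) can be sketched as below. The presence-absence matrix, the row-shuffle randomization scheme, and the C-score-style statistic are illustrative choices, not the methods of any surveyed paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def checkerboard_units(m):
    """C-score numerator: for each species pair i, j with occurrences Ri, Rj
    and S shared sites, sum (Ri - S) * (Rj - S); higher = more segregated."""
    m = np.asarray(m, int)
    occ = m.sum(axis=1)
    total = 0
    for i in range(m.shape[0]):
        for j in range(i + 1, m.shape[0]):
            s = int((m[i] & m[j]).sum())
            total += (occ[i] - s) * (occ[j] - s)
    return total

def null_model_p(m, n_iter=999):
    """Randomize by shuffling each species' presences across sites
    (preserves species frequencies; sites treated as equiprobable)."""
    m = np.asarray(m, int)
    obs = checkerboard_units(m)
    null = np.array([checkerboard_units([rng.permutation(row) for row in m])
                     for _ in range(n_iter)])
    # proportion of null matrices at least as segregated as the observed one
    p = (np.sum(null >= obs) + 1) / (n_iter + 1)
    return obs, p

# Hypothetical 4 species x 6 sites presence-absence matrix
m = [[1, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 1],
     [1, 1, 0, 0, 0, 1],
     [0, 0, 1, 1, 1, 0]]
obs, p = null_model_p(m)
print(obs, p)
```

A small p here would suggest the species co-occur less often than chance expects, which is the "assembly rules" signal the surveyed studies test for; the choice of null model (which matrix margins are held fixed) is itself one of the methodological issues the review discusses.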
Abstract:
We conducted a preliminary, questionnaire-based, retrospective analysis of training and injury in British National Squad Olympic distance (OD) and Ironman distance (IR) triathletes. The main outcome measures were training duration and training frequency and injury frequency and severity. The number of overuse injuries sustained over a 5-year period did not differ between OD and IR. However, the proportions of OD and IR athletes who were affected by injury to particular anatomical sites differed (p < 0.05). Also, fewer OD athletes (16.7 vs. 36.8%, p < 0.05) reported that their injury recurred. Although OD sustained fewer running injuries than IR (1.6 ± 0.5 vs. 1.9 ± 0.3, p < 0.05), more subsequently stopped running (41.7 vs. 15.8%) and for longer (33.5 ± 43.0 vs. 16.7 ± 16.6 days, p < 0.01). In OD, the number of overuse injuries sustained inversely correlated with percentage training time, and number of sessions, doing bike hill repetitions (r = -0.44 and -0.39, respectively, both p < 0.05). The IR overuse injury number correlated with the amount of intensive sessions done (r = 0.67, p < 0.01 and r = 0.56, p < 0.05 for duration of "speed run" and "speed bike" sessions). Coaches should note that training differences between triathletes who specialize in OD or IR competition may lead to their exhibiting differential risk for injury to specific anatomical sites. It is also important to note that cycle and run training may have a "cumulative stress" influence on injury risk. Therefore, the tendency of some triathletes to modify rather than stop training when injured (usually by increasing load in another discipline from that in which the injury first occurred) may increase both their risk of injury recurrence and time to full rehabilitation.
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity (which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error) renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
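As a minimal illustration of one of the designs this abstract lists, a difference-in-differences estimate on hypothetical two-group, two-period data might look like the sketch below (the numbers are invented for illustration):

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Difference-in-differences: the change in the treated group minus the
    change in the control group. Under the parallel-trends assumption this
    removes stable group-level differences and common time shocks."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) - \
           (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

# Hypothetical outcome scores before and after an intervention
treat_pre  = [10, 11, 9, 10]
treat_post = [14, 15, 13, 14]
ctrl_pre   = [10, 10, 11, 9]
ctrl_post  = [11, 11, 12, 10]
print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))  # 3.0
```

The point of the design is visible in the arithmetic: the treated group improved by 4 and the control by 1, so the naive post-pre change of 4 overstates the effect that survives once the shared time trend is differenced out.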
Abstract:
1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). Particularly, one-class pattern recognition is a flexible and seldom used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups as well as effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. These results differ from previous efforts to evaluate alternative modelling approaches to build ENM and are particularly noteworthy because data are from exhaustively sampled populations minimizing false absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
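A minimal sketch of one-class classification for presence-only niche modelling is shown below, using a Mahalanobis-distance envelope. This is a simple parametric one-class method chosen for illustration, not one of the paper's nine pattern-recognition methods, and the environmental data are synthetic:

```python
import numpy as np

class MahalanobisEnvelope:
    """One-class classifier: fit a Gaussian envelope to presence-only data and
    flag new points whose Mahalanobis distance exceeds a fitted threshold."""
    def fit(self, X, quantile=0.95):
        X = np.asarray(X, float)
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        # threshold so ~95% of training presences fall inside the envelope
        self.threshold = np.quantile(self._dist(X), quantile)
        return self

    def _dist(self, X):
        diff = np.asarray(X, float) - self.mu
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, self.cov_inv, diff))

    def predict(self, X):
        """1 = inside the modelled niche, 0 = outside."""
        return (self._dist(X) <= self.threshold).astype(int)

rng = np.random.default_rng(1)
# Synthetic presence-only data: two environmental variables
# (e.g. temperature, moisture) at 200 occupied sites
presences = rng.normal([15.0, 0.6], [2.0, 0.1], size=(200, 2))
model = MahalanobisEnvelope().fit(presences)
print(model.predict([[15.0, 0.6], [30.0, 0.1]]))  # prints [1 0]
```

Note that only presences are needed to fit the model, which is exactly the appeal of one-class methods discussed above: no pseudo-absence or background sampling decision is required, at the cost of no direct control over commission error.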
Abstract:
Interaction analysis is not a prerogative of any discipline in social sciences. It has its own history within each disciplinary field and is related to specific research objects. From the standpoint of psychology, this article first draws upon a distinction between factorial and dialogical conceptions of interaction. It then briefly presents the basis of a dialogical approach in psychology and focuses upon four basic assumptions. Each of them is examined on a theoretical and on a methodological level with a leading question: to what extent is it possible to develop analytical tools that are fully coherent with dialogical assumptions? The conclusion stresses the difficulty of developing methodological tools that are fully consistent with dialogical assumptions and argues that there is an unavoidable tension between accounting for the complexity of an interaction and using methodological tools which necessarily "monologise" this complexity.
Abstract:
The discipline of Enterprise Architecture Management (EAM) deals with the alignment of business and information systems architectures. While EAM has long been regarded as a discipline for IT managers, this book takes a different stance: It explains how top executives can use EAM for leveraging their strategic planning and controlling processes and how EAM can contribute to sustainable competitive advantage. Based on the analysis of best practices from eight leading European companies from various industries, the book presents crucial elements of successful EAM. It outlines what executives need to do in terms of governance, processes, methodologies and culture in order to bring their management to the next level. Beyond this, the book points out how EAM might develop in the next decade, allowing today's managers to prepare for the future of architecture management.
Abstract:
The use of ecological momentary assessment (EMA) for studying parenting has been rare. We examined the psychometric properties and structural validity of an EMA Parenting Scale based on 32 mothers' reports of their parenting over a period of 10 consecutive days, and explored the acceptance of the scale and compliance with the procedure. The results suggested that the EMA Parenting Scale was well accepted for the assessment of daily parenting, and that it consistently captured the overreactive and lax dimensions of parenting across different episodes of child misbehavior. Moreover, multilevel analyses suggested that the scale was sensitive to change across different parenting episodes, and that it reliably assessed the dimensions at both the personal and situational levels.
Abstract:
The International Society for Clinical Densitometry (ISCD) and the International Osteoporosis Foundation (IOF) convened the FRAX® Position Development Conference (PDC) in Bucharest, Romania, on November 14, 2010, following a two-day joint meeting of the ISCD and IOF on the "Interpretation and Use of FRAX® in Clinical Practice." These three days of critical discussion and debate, led by a panel of international experts from the ISCD, IOF and dedicated task forces, have clarified a number of important issues pertaining to the interpretation and implementation of FRAX® in clinical practice. The Official Positions resulting from the PDC are intended to enhance the quality and clinical utility of fracture risk assessment worldwide. Since the field of skeletal assessment is still evolving rapidly, some clinically important issues addressed at the PDCs are not associated with robust medical evidence. Accordingly, some Official Positions are based largely on expert opinion. Despite limitations inherent in such a process, the ISCD and IOF believe it is important to provide clinicians and technologists with the best distillation of current knowledge in the discipline of bone densitometry and provide an important focus for the scientific community to consider. This report describes the methodology and results of the ISCD-IOF PDC dedicated to FRAX®.