850 results for just-about-right scale
Abstract:
Objectives - The absence of pathophysiologically relevant diagnostic markers of bipolar disorder (BD) leads to its frequent misdiagnosis as unipolar depression (UD). We aimed to determine whether whole brain white matter connectivity differentiated BD from UD depression. Methods - We employed a three-way analysis of covariance, covarying for age, to examine whole brain fractional anisotropy (FA), and corresponding longitudinal and radial diffusivity, in currently depressed adults: 15 with BD-type I (mean age 36.3 years, SD 12.0 years), 16 with recurrent UD (mean age 32.3 years, SD 10.0 years), and 24 healthy control adults (HC) (mean age 29.5 years, SD 9.43 years). The depressed groups did not differ in depression severity, age of illness onset, or illness duration. Results - There was a main effect of group in the left superior and inferior longitudinal fasciculi (SLF and ILF) (all F ≥ 9.8; p ≤ .05, corrected). Whole brain post hoc analyses (all t ≥ 4.2; p ≤ .05, corrected) revealed decreased FA in the left SLF in BD versus UD adults in inferior temporal cortex and versus HC in primary sensory cortex (associated with increased radial and decreased longitudinal diffusivity, respectively), and decreased FA in the left ILF in UD adults versus HC. A main effect of group in the right uncinate fasciculus (in orbitofrontal cortex) just failed to meet significance in all participants but was present in women. Post hoc analyses revealed decreased right uncinate fasciculus FA in BD versus HC, both in all participants and in women. Conclusions - White matter FA in left occipitotemporal and primary sensory regions supporting visuospatial and sensory processing differentiates BD from UD depression. Abnormally reduced FA in right fronto-temporal regions supporting mood regulation might underlie predisposition to depression in BD. These measures might help differentiate pathophysiologic processes of BD versus UD depression.
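For readers who want to see the analysis pattern in code, below is a minimal, hypothetical sketch of a single-ROI analysis of covariance comparing mean FA across the three groups while covarying for age (statsmodels in Python). The FA means, variances and ages are invented for illustration; this is not the study's voxelwise pipeline.

```python
# Hypothetical single-ROI ANCOVA sketch: group effect on FA, covarying for age.
# All numbers below are invented; the study's actual analysis was voxelwise.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for group, n, mean_age, mean_fa in [("BD", 15, 36.3, 0.42),
                                    ("UD", 16, 32.3, 0.44),
                                    ("HC", 24, 29.5, 0.46)]:
    for _ in range(n):
        rows.append({"group": group,
                     "age": rng.normal(mean_age, 10.0),
                     "fa": rng.normal(mean_fa, 0.03)})   # made-up ROI FA values
df = pd.DataFrame(rows)

model = smf.ols("fa ~ C(group) + age", data=df).fit()   # FA ~ group + age covariate
print(sm.stats.anova_lm(model, typ=2))                  # F test for the group effect
```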
Abstract:
This article examines the development and impact of German citizenship policy over the past decade. As its point of departure, it takes the 2000 Citizenship Law, which sought to undertake a full-scale reform and liberalisation of access to German membership. The article discusses this law’s content and subsequent amendments, focusing particularly on its quantitative impact and asking why the number of naturalisations has been lower than originally expected. The article outlines current challenges to the law’s structure and operation and identifies potential trajectories for its future development.
Abstract:
Humans imitate biological movements faster than non-biological movements. The faster response has been attributed to an activation of the human mirror neuron system, which is thought to match observation and execution of actions. However, it is unclear which cortical areas are responsible for this behavioural advantage. Also, little is known about the timing of activations. Using whole-head magnetoencephalography, we recorded neuronal responses to single biological finger movements and non-biological dot movements while the subjects were required to perform an imitation task or an observation task, respectively. Previous imaging studies on the human mirror neuron system suggested that activation in response to biological movements would be stronger in ventral premotor, parietal and superior temporal regions. In accordance with previous studies, reaction times to biological movements were faster than those to dot movements in all subjects. The analysis of evoked magnetic fields revealed that the reaction time benefit was paralleled by stronger and earlier activation of the left temporo-occipital cortex, right superior temporal area and right ventral motor/premotor area. The activity patterns suggest that the latter areas mediate the observed behavioural advantage of biological movements and indicate a predominant contribution of right-hemisphere temporo-frontal regions to action observation–execution matching processes in intransitive movements, which has not been reported previously.
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: The SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues, nor on other measures of generic organisational strengthening.
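The headline respiratory-rate result can be sanity-checked by hand. A crude, unadjusted "difference in difference" odds ratio built from the quoted percentages (control 40% to 69%, SPI1 37% to 78%) is

\[
\mathrm{OR}_{\text{DiD}}
= \frac{(0.78/0.22)\,/\,(0.37/0.63)}{(0.69/0.31)\,/\,(0.40/0.60)}
\approx \frac{3.55/0.59}{2.23/0.67}
\approx \frac{6.0}{3.3}
\approx 1.8,
\]

which is in the same range as, though not identical to, the reported 2.1; the published figure comes from the study's full statistical model rather than this back-of-envelope calculation.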
Abstract:
Purpose: The purpose of this paper is to identify some of the dilemmas involved in the debate on the how, when and why of mixed methods research. Design/methodology/approach: The authors' starting point is formed by developments in the philosophy of science literature, and recent publications on mixed methods research outside of the management accounting domain. Findings: Contrary to recent claims made in the management accounting literature, the authors assert that uncovering points of disagreement between methods may be as far as researchers can go by combining them. Being reflexive can help to provide a deeper understanding of the research process and the researcher's role in this process. Research limitations/implications: The paper should extend the debate among management accounting researchers about mixed methods research. One of the lessons drawn is that researchers are actively immersed in the research process and cannot purge their own interests and views. Accepting this lesson casts doubt on what the act of research may imply and achieve. Practical implications: The paper shows that combinations of research methods should not be made based on a "whatever works" attitude, since this approach ultimately is still infused with ontological and epistemological considerations that researchers have, and should try to explicate. Originality/value: The value of this paper lies in the provision of philosophical underpinnings that have not been widely considered in the management accounting literature on mixed methods to date. © 2011 Emerald Group Publishing Limited. All rights reserved.
Abstract:
In less than a decade, personal computers have become part of our daily lives. Many of us come into contact with computers every day, whether at work, school or home. As useful as the new technologies are, they also have a darker side. By making computers part of our daily lives, we run the risk of allowing thieves, swindlers, and all kinds of deviants directly into our homes. Armed with a personal computer, a modem and just a little knowledge, a thief can easily access confidential information, such as details of bank accounts and credit cards. This book is intended to help people avoid harm at the hands of Internet criminals. It offers a tour of the more dangerous parts of the Internet, as the author explains who the predators are, their motivations, how they operate and how to protect against them. Behind the doors of our own homes, we assume we are safe from predators, con artists, and other criminals wishing us harm. But the proliferation of personal computers and the growth of the Internet have invited these unsavory types right into our family rooms. With a little psychological knowledge a con man can start to manipulate us in different ways. A terrorist can recruit new members and raise money over the Internet. Identity thieves can gather personal information and exploit it for criminal purposes. Spammers can wreak havoc on businesses and individuals. Here, an expert helps readers recognize the signs of a would-be criminal in their midst. Focusing on the perpetrators, the author provides information about how they operate, why they do it, what they hope to do, and how to protect yourself from becoming a victim.
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With its novel technique of parallel sliding windows (PSW) for loading subgraphs from disk into memory to update vertices and edges, it can achieve data processing performance close to, and in some cases better than, that of mainstream distributed graph engines. The GraphChi authors noted that its memory is not effectively utilized with large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode was implemented with only about 40 additional lines of code in the original GraphChi engine. Extensive experiments were performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, we find that pinning a larger portion of data in memory does not always lead to better performance when the whole dataset cannot fit in memory; there exists an optimal portion of data to keep in memory to achieve the best computational performance.
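As a rough illustration of the idea (not the authors' implementation, which is roughly 40 lines inside the GraphChi C++ engine), the sketch below pins a fixed fraction of shards in memory for the whole computation and streams the remaining shards from disk on every pass; the class name and pinned_fraction parameter are hypothetical.

```python
# Illustrative Part-in-memory sketch: pin a fixed fraction of shards in RAM for
# the whole run and re-read the rest from disk on every pass. Names are hypothetical.
class PartInMemoryShards:
    def __init__(self, shard_paths, pinned_fraction=0.4):
        n_pinned = int(len(shard_paths) * pinned_fraction)
        # Shards pinned once and kept resident until the computation finishes.
        self.pinned = {path: self._read(path) for path in shard_paths[:n_pinned]}
        self.streamed_paths = shard_paths[n_pinned:]

    @staticmethod
    def _read(path):
        with open(path, "rb") as f:
            return f.read()          # raw shard bytes (edges and edge values)

    def iter_shards(self):
        # Pinned shards are served from memory on every iteration ...
        for path, data in self.pinned.items():
            yield path, data
        # ... while the remaining shards incur disk I/O on each pass.
        for path in self.streamed_paths:
            yield path, self._read(path)
```

The paper's observation that a larger pinned portion is not always better corresponds, in this sketch, to choosing pinned_fraction below 1.0 when the full dataset does not fit in memory.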
Abstract:
This article discusses the question of compositionality by examining whether the indiscriminacy reading of the collocation of just with any can be shown to be a consequence of the schematic meaning-potential of each of these two items. A comparison of just with other restrictive focus particles allows its schematic meaning to be defined as that of goodness of fit. Any is defined as representing an indefinite member of a set as extractable from the set in exactly the same way as each of the other members thereof. The collocation just any often gives rise to a scalar reading oriented towards the lowest value on the scale, because focus on the unconstrained extractability of a random indefinite item brings into consideration even marginal cases, and the latter tend to be interpreted as situated on the lower end of the scale. The attention to low-end values also explains why just any is regularly found with the adjective old, the prepositional phrase at all and various devaluating expressions. It is concluded that the meanings of the component parts of this collocation do indeed account for the meaning of the whole, and that an appropriate methodology allows identification of linguistic meanings and their interrelations. © 2011 Elsevier B.V.
Abstract:
Background: Adherence to treatment is often reported to be low in children with cystic fibrosis. Adherence in cystic fibrosis is an important research area, and more research is needed to better understand family barriers to adherence so that clinicians can provide appropriate intervention. The aim of this study was to evaluate adherence to enzyme supplements, vitamins and chest physiotherapy in children with cystic fibrosis and to determine whether any modifiable risk factors are associated with adherence. Methods: A sample of 100 children (≤18 years) with cystic fibrosis (44 male; median [range] 10.1 [0.2-18.6] years) and their parents were recruited to the study from the Northern Ireland Paediatric Cystic Fibrosis Centre. Adherence to enzyme supplements, vitamins and chest physiotherapy was assessed using a multi-method approach including the Medication Adherence Report Scale, pharmacy prescription refill data and general practitioner prescription issue data. Beliefs about treatments were assessed using refined versions of the Beliefs about Medicines Questionnaire-Specific. Parental depressive symptoms were assessed using the Center for Epidemiologic Studies Depression Scale. Results: Using the multi-method approach, 72% of children were classified as low adherers to enzyme supplements, 59% as low adherers to vitamins and 49% as low adherers to chest physiotherapy. Variations in adherence were observed between measurement methods, treatments and respondents. Parental necessity beliefs and child age were significant independent predictors of child adherence to enzyme supplements and chest physiotherapy, but parental depressive symptoms were not found to be predictive of adherence. Conclusions: Child age and parental beliefs about treatments should be taken into account by clinicians when addressing adherence at routine clinic appointments. Low adherence is more likely to occur in older children, whereas better adherence to cystic fibrosis therapies is more likely in children whose parents strongly believe the treatments are necessary. The necessity of treatments should be reinforced regularly to both parents and children.
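As an aside for readers unfamiliar with refill-based measures, one common way to turn prescription refill data into an adherence classification is a medication possession ratio with a cut-off. The abstract does not state the exact rule this study used, so the sketch below, including its dates and 0.8 threshold, is purely illustrative.

```python
# Illustrative only: a medication possession ratio (MPR) from refill records,
# with a commonly used 0.8 cut-off. The study's actual classification rule,
# dates and threshold are not specified in the abstract.
from datetime import date

def medication_possession_ratio(fills, period_days):
    """Total days' supply dispensed divided by days in the observation period."""
    supplied = sum(days_supply for _fill_date, days_supply in fills)
    return min(supplied / period_days, 1.0)

# Hypothetical refill history: (dispensing date, days of supply).
fills = [(date(2012, 1, 5), 28), (date(2012, 2, 20), 28), (date(2012, 4, 30), 28)]
mpr = medication_possession_ratio(fills, period_days=180)
low_adherer = mpr < 0.8
print(round(mpr, 2), low_adherer)    # 0.47 True
```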
Abstract:
Decision making and technical decision analysis demand computer-aided techniques and therefore increasing support from formal methods. In recent years, fuzzy decision analysis and related techniques have gained importance as efficient methods for planning and optimization applications in fields such as production planning, financial and economic modeling, forecasting and classification. Hierarchical modeling of the decision situation is also one of the most popular modeling approaches. It is shown how to use the fuzzy hierarchical model in combination with other methods of Multiple Criteria Decision Making. We propose a novel approach to overcome the inherent limitations of hierarchical methods by exploiting multiple criteria decision making.
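To make the idea concrete, here is a minimal sketch (with invented weights and ratings) of the kind of aggregation a fuzzy hierarchical MCDM model performs at one level of the hierarchy: criterion weights and alternative ratings are triangular fuzzy numbers, combined by a fuzzy weighted sum and defuzzified by the centroid. This is a generic textbook-style construction, not the paper's specific method.

```python
# Minimal fuzzy-weighted-sum sketch with triangular fuzzy numbers (l, m, u).
# Weights and ratings are invented; a full model repeats this at every level
# of the hierarchy and then ranks the alternatives by their crisp scores.

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def tfn_mul(a, b):
    # Standard approximation for positive triangular fuzzy numbers.
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def centroid(t):
    return sum(t) / 3.0              # simple defuzzification

weights = [(0.2, 0.3, 0.4), (0.5, 0.6, 0.7)]    # fuzzy criterion weights
ratings = [(6, 7, 8), (3, 4, 5)]                # fuzzy ratings of one alternative

score = (0.0, 0.0, 0.0)
for w, r in zip(weights, ratings):
    score = tfn_add(score, tfn_mul(w, r))

print(score, centroid(score))        # fuzzy score and its crisp ranking value
```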
Abstract:
Ironically, the “learning of percent” is one of the most problematic aspects of school mathematics. In our view, these difficulties are not associated with the arithmetic aspects of “percent problems”, but mostly with two methodological issues: firstly, providing students with a simple and accurate understanding of the rationale behind the use of percent, and secondly, overcoming the psychological complexities students face in fluently and comprehensively understanding the sometimes peculiar wording of “percent problems”. Before we talk about percent, it is necessary to acquaint students with the much more fundamental and important (regrettably, not covered by the school syllabus) classical concepts of quantitative and qualitative comparison of values, to give students the opportunity to learn the relevant standard terminology and become accustomed to conventional turns of speech. Further, it makes sense to briefly touch on the issue (important in its own right) of different representations of numbers. Percent is just one of the technical, but common, forms of data representation: p% = p × % = p × 0.01 = p × 1/100 = p/100 = p × 10⁻². “Percent problems” involve just two cases: (I) the ratio of a variation m to the standard M, and (II) the relative deviation of a variation m from the standard M. The hardest and most essential part of each specific “percent problem” is not the routine arithmetic involved, but the ability to figure out, and clearly understand, which of the quantities in the problem statement is the standard and which is the variation. This, first and foremost, is what teachers need to patiently and persistently teach their students. As a matter of fact, most primary school pupils are not yet quite ready for the lexical specificity of “percent problems”. ... Math teachers should closely, hand in hand with their students, carry out a linguistic analysis of the wording of each problem ... Schoolchildren must firmly understand that a comparison of objects is only meaningful when we speak about properties which can be objectively expressed in terms of actual numerical characteristics. In our opinion, an adequate acquisition of the teaching unit on percent cannot be achieved in primary school, due to objective psychological specificities related to this age and to the level of general training of students. Yet, if we want to make this topic truly accessible and practically useful, it should be taught in high school. A final question to the reader (quickly, please): which is greater, π% of e or e% of π?
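The closing teaser has a one-line answer (assuming the partly garbled question is indeed comparing π% of e with e% of π): the two quantities are equal, because taking p% of q and q% of p both multiply the same two numbers by 1/100:

\[
\pi\%\ \text{of}\ e \;=\; \frac{\pi}{100}\cdot e \;=\; \frac{e}{100}\cdot \pi \;=\; e\%\ \text{of}\ \pi .
\]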
Abstract:
GitHub is the most popular repository for open source code (Finley 2011). It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories, as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study of the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events, and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user have a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity of the social connections. We also measure the activity of each user in terms of authored events, and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and investigate how distance influences collaboration.
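For concreteness, a minimal sketch of the reciprocity measurement on a directed "follows" graph might look like the following; the edge list is a made-up toy example rather than the GitHub event logs.

```python
# Toy reciprocity check on a directed "follows" graph: the fraction of follow
# edges whose reverse edge also exists. The edge list is invented; the paper
# computes this on GitHub's event stream and finds remarkably low reciprocity.
import networkx as nx

follows = nx.DiGraph()
follows.add_edges_from([
    ("alice", "bob"),     # alice follows bob
    ("bob", "alice"),     # reciprocated
    ("alice", "carol"),   # not reciprocated
    ("dave", "carol"),    # not reciprocated
])

reciprocated = sum(1 for u, v in follows.edges if follows.has_edge(v, u))
print(reciprocated / follows.number_of_edges())   # 0.5 on this toy graph
```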
Abstract:
Starting with a description of the software and hardware used for corpus linguistics in the late 1980s to early 1990s, this contribution discusses difficulties faced by the software designer when attempting to allow users to study text. Future human-machine interfaces may develop to be much more sophisticated, and certainly the aspects of text which can be studied will progress beyond plain text without images. Another area which will develop further is the study of patternings involving not just single words but word-relations across large stretches of text.
Abstract:
The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark, and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this we have extended the evaluation protocol from the Middlebury evaluation, necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a tradeoff between the quality of the reconstructed 3D points (accuracy) and how much of an object’s surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
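The accuracy/completeness trade-off mentioned above rests on two one-sided point-cloud distances. The sketch below shows that core idea on random stand-in data; the published protocol additionally handles observability masks and outliers, so this is not the full evaluation framework.

```python
# Core of the accuracy/completeness idea, on stand-in data:
#   accuracy     = distances from reconstructed points to the reference scan
#   completeness = distances from reference points to the reconstruction
import numpy as np
from scipy.spatial import cKDTree

def one_sided_distances(src, dst):
    return cKDTree(dst).query(src)[0]     # nearest-neighbour distance per point

recon = np.random.rand(1000, 3)           # stand-in for an MVS point cloud
reference = np.random.rand(1200, 3)       # stand-in for the structured-light scan

accuracy = np.mean(one_sided_distances(recon, reference))
completeness = np.mean(one_sided_distances(reference, recon))
print(accuracy, completeness)
```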
Abstract:
Brewin and Andrews (2016) propose that just 15% of people, or even fewer, are susceptible to false childhood memories. If this figure were true, then false memories would still be a serious problem. But the figure is higher than 15%. False memories occur even after a few short and low-pressure interviews, and with each successive interview they become richer, more compelling, and more likely to occur. It is therefore dangerously misleading to claim that the scientific data provide an “upper bound” on susceptibility to memory errors. We also raise concerns about the peer review process.