958 results for Check-In
Abstract:
Background - Not only is compulsive checking the most common symptom in Obsessive Compulsive Disorder (OCD), with an estimated prevalence of 50–80% in patients, but approximately 15% of the general population show subclinical checking tendencies that negatively affect their performance in daily activities. It is therefore critical to understand how checking affects attention and memory in clinical as well as subclinical checkers. Eye fixations are commonly used as indicators of the distribution of attention, but research in OCD has revealed mixed results at best. Methodology/Principal Findings - Here we report atypical eye movement patterns in subclinical checkers during an ecologically valid working memory (WM) manipulation. Our key manipulation was to present an intermediate probe during the delay period of the memory task, explicitly asking for the location of a letter that had not, however, been part of the encoding set (i.e., misleading participants). Using eye movement measures we now provide evidence that high checkers' inhibitory impairments for misleading information result in them checking the contents of WM in an atypical manner. When misleading information is presented, checkers fixate more often and for longer than non-checkers. Specifically, checkers spend more time checking stimulus locations as well as locations that had actually been empty during encoding. Conclusions/Significance - We conclude that these atypical eye movement patterns directly reflect internal checking of memory contents, and we discuss the implications of our findings for the interpretation of behavioural and neuropsychological data. In addition, our results highlight the importance of ecologically valid methodology for revealing the impact of detrimental attention and memory checking on eye movement patterns.
Abstract:
The ambulatory electroencephalogram has been used for differentiating epileptic from nonepileptic events, recording seizure frequency and classifying seizure type. We prospectively studied 100 consecutive children, aged 11 days to 16 years, who were referred to a regional children's hospital for an ambulatory electroencephalogram. The ambulatory electroencephalogram was clinically useful in contributing to a diagnosis in 71% of the children, who were referred with a range of clinical questions. A diagnosis of an epileptic disorder was confirmed by obtaining an ictal record in 26%, and this included 11 children who had previously had normal awake and/or sleep electroencephalograms. We recommend making a telephone check of the current target event frequency and prioritising children with typical events on most days, in order to improve the likelihood of recording a typical attack.
Abstract:
We evaluated inter-individual variability in the optimal current direction for biphasic transcranial magnetic stimulation (TMS) of the motor cortex. The motor threshold for the first dorsal interosseous muscle was detected visually at eight coil orientations in 45° increments. Each participant (n = 13) completed two experimental sessions. One participant with a low test–retest correlation (Pearson's r < 0.5) was excluded. In four subjects, visual detection of the motor threshold was compared to EMG detection; the motor thresholds were very similar and highly correlated (0.94–0.99). Consistent with previous studies, stimulation in the majority of participants was most effective when the first current pulse flowed in a postero-lateral direction in the brain. However, in four participants the optimal coil orientation deviated from this pattern. A principal component analysis using all eight orientations suggests that, in our sample, the optimal current direction was normally distributed around the postero-lateral orientation with a range of 63° (S.D. = 13.70°). Whenever the intensity of stimulation at the target site is calculated as a percentage of the motor threshold, it may be worthwhile, in order to minimize intensity and side-effects, to check whether rotating the coil 45° from the traditional postero-lateral orientation decreases the motor threshold.
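The recommendation above amounts to a simple comparison across coil orientations. A minimal sketch of that bookkeeping follows; the threshold values and the 110% stimulation convention are hypothetical illustrations, not data from the study.

# Minimal sketch: pick the coil orientation with the lowest motor threshold (MT)
# and derive the stimulation intensity from it. All numbers are hypothetical.

# MT (% of maximum stimulator output) at eight orientations spaced 45 degrees apart;
# 0 denotes the traditional postero-lateral orientation.
thresholds = {0: 52, 45: 48, 90: 55, 135: 61, 180: 64, 225: 60, 270: 57, 315: 50}

standard = thresholds[0]
best_angle = min(thresholds, key=thresholds.get)
best = thresholds[best_angle]

# Stimulation intensity taken as 110% of MT (hypothetical convention).
print(f"standard orientation: MT={standard}%, stimulate at {1.1 * standard:.1f}%")
print(f"best orientation ({best_angle} deg): MT={best}%, stimulate at {1.1 * best:.1f}%")
if best < standard:
    print(f"rotating the coil by {best_angle} deg lowers MT by {standard - best} points")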
Abstract:
This article is based on a case study carried out in a small inner-city primary school in the English south midlands. The key aim of the research was to examine the factors affecting the progress of children in the school, to assess the school's response, and to make recommendations that would enhance good practice and support the school's responsibilities under the Race Relations Act (2000). The focal point was children for whom English is an additional language (EAL). This article considers the relevance of such a study in gathering the views of EAL and minority ethnic parents, carers and professionals, and how far it could be used by any school as part of a regular check to determine how well it is providing for their children.
Abstract:
The typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
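As background for readers unfamiliar with LDPC codes, the sketch below shows the elementary parity-check operation that underlies them: a word is a codeword exactly when its syndrome H·x (mod 2) is the zero vector. The small matrix used here is purely illustrative and is not one of the Gallager or MacKay-Neal constructions analysed above.

# Minimal sketch of the parity-check (syndrome) test behind LDPC codes.
# H is a small illustrative parity-check matrix, not a code from the paper.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
]

def syndrome(H, x):
    """Return H.x over GF(2); an all-zero syndrome means x satisfies every check."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

codeword = [0, 0, 0, 0, 0, 0]   # trivially satisfies all parity checks
received = [1, 0, 0, 0, 0, 0]   # one bit flipped by channel noise

print(syndrome(H, codeword))    # [0, 0, 0] -> valid codeword
print(syndrome(H, received))    # non-zero  -> at least one check fails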
Abstract:
We determine the critical noise level for decoding low-density parity-check error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information-theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes, such as typical-pairs decoding, MAP, and finite-temperature decoding (MPM), becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN (MacKay-Neal) and Gallager codes. Our results are more optimistic than those derived using the methods of information theory and are in excellent agreement with recent results from another statistical-physics approach.
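For orientation, the two enumerators count candidate words with respect to different variables: the weight enumerator counts words by Hamming weight, whereas the magnetization enumerator counts them by their overlap with the transmitted codeword. The standard definitions below are shown only for illustration and are not taken from the paper:

\[
w(\mathbf{x}) = \sum_{i=1}^{N} x_i, \qquad
m(\mathbf{s}) = \frac{1}{N}\sum_{i=1}^{N} s_i\, s_i^{0} \;=\; 1 - \frac{2\, d_H(\mathbf{s}, \mathbf{s}^{0})}{N},
\]

where \(\mathbf{x} \in \{0,1\}^N\) is a candidate word, \(\mathbf{s}\) its spin representation with \(s_i = (-1)^{x_i}\), \(\mathbf{s}^{0}\) the transmitted codeword, and \(d_H\) the Hamming distance; \(m = 1\) corresponds to perfect decoding.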
Abstract:
Transition P Systems are a parallel and distributed computational model based on the notion of the cellular membrane structure. Each membrane determines a region that encloses a multiset of objects and evolution rules. Transition P Systems evolve through transitions between two consecutive configurations, which are determined by the membrane structure and the multisets present inside the membranes. Moreover, transitions between two consecutive configurations are produced by an exhaustive, non-deterministic and parallel application of evolution rules. However, to establish the rules to be applied, the useful, applicable and active rules must first be computed. Hence, the computation of useful evolution rules is critical for the efficiency of the whole evolution process, because it is performed in parallel inside each membrane at every evolution step. This work defines usefulness states through an exhaustive analysis of the P system for every membrane and for every possible configuration of the membrane structure during the computation. Moreover, this analysis can be done statically; therefore, during execution, membranes only have to check their usefulness states to obtain their set of useful rules.
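The applicability test mentioned above can be illustrated with a minimal sketch (a simplified model, not the authors' usefulness-state algorithm): a rule is applicable in a membrane when the multiset on its left-hand side is contained in the membrane's current multiset of objects.

from collections import Counter

# Minimal sketch: determine which evolution rules are applicable in a membrane,
# i.e. whose left-hand multiset is contained in the membrane's multiset of objects.
# The paper's usefulness analysis additionally takes the membrane structure into account.

membrane = Counter({"a": 3, "b": 1})

rules = {
    "r1": Counter({"a": 2}),           # consumes a a
    "r2": Counter({"a": 1, "b": 1}),   # consumes a b
    "r3": Counter({"c": 1}),           # consumes c
}

def applicable(lhs, contents):
    """A rule is applicable if every object it consumes is available in the membrane."""
    return all(contents[obj] >= n for obj, n in lhs.items())

print([name for name, lhs in rules.items() if applicable(lhs, membrane)])
# -> ['r1', 'r2']; r3 is not applicable because no object 'c' is present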
Abstract:
In this article I argue that the study of the linguistic aspects of epistemology has become unhelpfully focused on the corpus-based study of hedging, and that a corpus-driven approach can help to improve upon this. Focusing on a corpus of texts from one discourse community (that of genetics) and identifying frequent tri-lexical clusters containing highly frequent lexical items identified as keywords, I undertake an inductive analysis identifying patterns of epistemic significance. Several of these patterns are shown to be hedging devices, and the whole-corpus frequencies of the most salient of these, candidate and putative, are then compared to the whole-corpus frequencies of comparable wordforms and clusters of epistemic significance. Finally, I interviewed a 'friendly geneticist' in order to check my interpretation of some of the terms used and to obtain an expert interpretation of the overall findings. In summary, I argue that the highly unexpected patterns of hedging found in genetics demonstrate the value of adopting a corpus-driven approach and constitute an advance in our current understanding of how to approach the relationship between language and epistemology.
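The cluster-extraction step described above can be sketched in a few lines; the toy corpus and keyword list below are hypothetical, and the actual study identified its keywords statistically against a reference corpus.

from collections import Counter

# Minimal sketch: count tri-lexical clusters (3-word sequences) that contain a keyword.
# Corpus and keyword list are hypothetical.

corpus = ("the candidate gene was identified as a putative regulator of the "
          "pathway and the candidate gene was then sequenced").split()

keywords = {"candidate", "putative"}

trigrams = Counter(tuple(corpus[i:i + 3]) for i in range(len(corpus) - 2))

# Keep only the clusters that contain at least one keyword.
keyword_clusters = {t: n for t, n in trigrams.items() if keywords & set(t)}
for cluster, n in sorted(keyword_clusters.items(), key=lambda kv: -kv[1]):
    print(n, " ".join(cluster))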
Abstract:
Objective: The objective of the study is to explore gastroenterologists' preferences for biosimilar drugs in Crohn's disease and to reveal trade-offs between the perceived risks and benefits of biosimilar drugs. Method: A discrete choice experiment was carried out involving 51 Hungarian gastroenterologists in May 2014. The following attributes were used to describe hypothetical choice sets: 1) type of treatment (biosimilar/originator), 2) severity of disease, 3) availability of a continuous medicine supply, and 4) frequency of efficacy check-ups. A multinomial logit model was used to differentiate between three attitude types: 1) always opting for the originator, 2) willing to consider a biosimilar for biological-naïve patients only, and 3) willing to consider biosimilar treatment for both types of patients. A conditional logit model was used to estimate the probabilities of choosing a given profile. Results: Men, senior consultants, gastroenterologists working in an IBD center and those treating more patients are more likely to be willing to consider a biosimilar for biological-naïve patients only. Treatment type (originator/biosimilar) was the most important determinant of choice for patients already treated with biologicals, and the availability of a continuous medicine supply was the most important determinant for biological-naïve patients. The probabilities of choosing the biosimilar with all the benefits offered over the originator under current reimbursement conditions are 89% vs 11% for new patients, and 44% vs 56% for patients already treated with a biological. Conclusions: Gastroenterologists were willing to trade between the perceived risks and benefits of biosimilars. A continuous medicine supply would be one of the major benefits of biosimilars. However, the benefits offered in the scenarios do not compensate for switching patients already treated with biologicals from the originator to the biosimilar treatment.
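For orientation, the conditional logit model referred to above assigns each hypothetical profile a choice probability of the standard form shown below (a textbook formulation, not an equation reproduced from the paper):

\[
P(i \mid C) \;=\; \frac{\exp(\beta' x_i)}{\sum_{j \in C} \exp(\beta' x_j)},
\]

where \(C\) is the choice set presented to the respondent, \(x_i\) the vector of attribute levels of profile \(i\) (treatment type, disease severity, continuity of medicine supply, frequency of efficacy check-ups), and \(\beta\) the vector of estimated preference weights.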
Abstract:
The effects of lead exposure may endure throughout one's lifetime and can negatively affect educational performance. While the link between the cause and effects of lead poisoning has been identified, the application of lead health education as a mechanism of disease prevention has not. The purpose of this study was to examine whether caregiver participation in a family-based educational intervention can result in decreased lead exposure in children of low socioeconomic status. Participants (n = 50) were caregivers of children 12 to 36 months of age. They were randomly selected from an urban clinic and randomly assigned to either a treatment or a control group. The experimental design of this study involved two clinic visits. Parents in the treatment group were given the educational intervention during the first clinic visit, while those in the control group were given the intervention during the second clinic visit. The intervention was reinforced with a lead education brochure coupled with a video on childhood lead poisoning. One instrument was used to test parental knowledge of lead poisoning both pre- and post-intervention. Blood lead levels in the pediatric participants were tested using two blood lead screens approximately three to four months apart, as determined by well-child check-up schedules. Findings from the analysis of variance showed a significant interaction between the change in blood lead level from the children's first to their second clinic visit and the treatment level, that is, the presence or absence of the educational intervention. Findings from an analysis of covariance indicate that caregivers in the treatment group had significantly higher scores on the CLKT at the second clinic visit than caregivers in the control group. These data suggest that the educational treatment is effective in increasing caregivers' knowledge of the dangers of lead poisoning and of strategies for lead poisoning prevention. The conclusions indicate that the education of adult caregivers can affect the blood lead levels of children, that the educational treatment increased the knowledge of caregivers, that caregivers were able to carry out the procedures taught, and that caregivers retained the knowledge over time.
A framework for transforming, analyzing, and realizing software designs in Unified Modeling Language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language due to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and production-quality tool support. However, there is a lack of precise definition of the semantics of individual UML notations, as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into software designs in UML, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML, and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework I propose consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can be adopted to check whether UML designs are complete or consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework developed in this investigation is creative and flexible, since it not only can be used to verify and validate UML designs but also provides an approach to building models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
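Since the framework relies on Petri-net analysis of the constructed component models, the sketch below illustrates the kind of check involved, using a deliberately simplified place/transition net rather than the two-layer algebraic high-level nets of the paper; the net itself is hypothetical.

# Minimal sketch: exhaustive reachability analysis of a tiny place/transition net.
# Each transition is a pair (inputs, outputs) of multisets over places.

transitions = {
    "send":    ({"idle": 1}, {"waiting": 1}),
    "ack":     ({"waiting": 1}, {"done": 1}),
    "timeout": ({"waiting": 1}, {"idle": 1}),
}

initial = frozenset({("idle", 1)})   # marking: one token in place 'idle'

def enabled(marking, inputs):
    m = dict(marking)
    return all(m.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, inputs, outputs):
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return frozenset((p, n) for p, n in m.items() if n > 0)

# Exhaustive exploration of all reachable markings.
seen, frontier = {initial}, [initial]
while frontier:
    marking = frontier.pop()
    for inputs, outputs in transitions.values():
        if enabled(marking, inputs):
            nxt = fire(marking, inputs, outputs)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

print(len(seen), "reachable markings")         # 3
print(any(("done", 1) in m for m in seen))     # True: the 'done' marking is reachable

A completeness or consistency property would then be phrased as a question about these reachable markings, for example that no reachable marking represents a deadlocked or contradictory scenario state.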
Abstract:
Deception research has traditionally focused on three methods of identifying liars and truth tellers: observing non-verbal or behavioral cues, analyzing verbal cues, and monitoring changes in physiological arousal during polygraph tests. Research shows that observers are often incapable of discriminating between liars and truth tellers with better-than-chance accuracy when they use these methods. One possible explanation for observers' poor performance is that they are not properly applying existing lie detection methods. An alternative explanation is that the cues on which these methods, and observers' judgments, are based do not reliably discriminate between liars and truth tellers. It may be possible to identify more reliable cues, and potentially improve observers' ability to discriminate, by developing a better understanding of how liars and truth tellers try to tell a convincing story. This research examined (a) the verbal strategies used by truthful and deceptive individuals during interviews concerning an assigned activity, and (b) observers' ability to discriminate between them based on their verbal strategies. In Experiment I, pre-interview instructions manipulated participants' expectations regarding verifiability; each participant was led to believe that the interviewer could check some types of details, but not others, before deciding whether the participant was being truthful or deceptive. Interviews were then transcribed and scored for the quantity and type of information provided. In Experiment II, observers listened to a random sample of the Experiment I interviews and rendered veracity judgments; half of the observers were instructed to judge the interviews according to the verbal strategies used by liars and truth tellers, and the other half were uninstructed. Results of Experiment I indicate that liars and truth tellers use different verbal strategies, characterized by a differential amount of detail. Overall, truthful participants provided more information than deceptive participants. This effect was moderated by participants' expectations regarding verifiability, such that truthful participants provided more information only with regard to verifiable details. Results of Experiment II indicate that observers instructed about liars' and truth tellers' verbal strategies identify them with greater accuracy than uninstructed observers.
Abstract:
In the discussion "Indirect Cost Factors in Menu Pricing," David V. Pavesic, Associate Professor of Hotel, Restaurant and Travel Administration at Georgia State University, initially states: "Rational pricing methodologies have traditionally employed quantitative factors to mark up food and beverage or food and labor because these costs can be isolated and allocated to specific menu items. There are, however, a number of indirect costs that can influence the price charged because they provide added value to the customer or are affected by supply/demand factors. The author discusses these costs and factors that must be taken into account in pricing decisions." Professor Pavesic offers as a given that menu pricing should cover costs, return a profit, reflect a value for the customer and, in the long run, attract customers and market the establishment. "Prices that are too high will drive customers away, and prices that are too low will sacrifice profit," Professor Pavesic puts it succinctly. Dovetailing with this premise, the author notes that although food cost figures markedly in menu pricing, other factors such as equipment utilization, popularity/demand, and marketing must also be considered. "... there is no single method that can be used to mark up every item on any given restaurant menu. One must employ a combination of methodologies and theories," says Professor Pavesic. "Therefore, when properly carried out, prices will reflect food cost percentages, individual and/or weighted contribution margins, price points, and desired check averages, as well as factors driven by intuition, competition, and demand." Additionally, Professor Pavesic emphasizes that value, as opposed to maximizing revenue, should be a primary motivating factor when designing menu pricing; this philosophy does come with certain caveats, which he explains. Generically speaking, Professor Pavesic says, "The market ultimately determines the price one can charge." But, in fine-tuning that decree, he further offers, "Lower prices do not automatically translate into value and bargain in the minds of the customers. Having the lowest prices in your market may not bring customers or profit." "Too often operators engage in price wars through discount promotions and find that profits fall and their image in the marketplace is lowered," Professor Pavesic warns. Among the intangibles that influence menu pricing, service is at the top of the list; ambience, location, amenities, product [i.e. food] presentation, and price elasticity are discussed as well. Price-value perception is another concept Professor Pavesic explains. He closes with a brief overview of a la carte pricing and its pros and cons.
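The quantitative markup the author refers to is commonly implemented as the cost-percentage (factor) method; the worked example below is illustrative only and does not come from the article:

\[
\text{menu price} \;=\; \frac{\text{item food cost}}{\text{target food cost percentage}},
\qquad \text{e.g.}\quad \frac{\$3.00}{0.30} = \$10.00 .
\]

Indirect cost factors of the kind the author discusses would then adjust this baseline price up or down.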
Abstract:
The purpose of this research was to gain an understanding of the study experience of non-American graduate students living outside of the United States and formally engaged in graduate studies in an American Distance Education (DE) program. These students have been labeled "culturally sensitive." The nature of this study dictated a qualitative case study methodology using in-depth interviews to collect the data and a hermeneutic approach to understanding and description. This study aims to generate questions and hypotheses that will lead to further investigations exploring the need for cultural and contextual sensitivity in order to provide more equitable and accessible higher education for all. The study attempted to answer the question: What is the study experience of "culturally sensitive" graduate students in American DE programs? The underlying issue in this study is whether education designed and provided by educators of socio-cultural backgrounds different from that of the students could be content-relevant and instructionally appropriate, resulting in educational enhancement and/or preparing students to function adequately in their own communities. Participants in this study (n = 12) were engaged in Master's-level (n = 2) and Doctoral-level (n = 10) DE programs at American universities, and were interviewed by e-mail, face-to-face, or using a combination of the two. Data analysis compared interviews and highlighted repetitive patterns. Interview data were triangulated with recent related literature and with data from document reviews of archived e-mail conversations between students and their professors. The patterns that emerged were coded and categorized according to generative themes. The following themes were identified in order to analyze the data and were confirmed through participant check-back: program benefits, communication, technology, culture and methodology, and reflectivity. Major findings in this study indicate that culture plays an important role in cross-cultural encounters for students in American DE programs, particularly in student perceptions as to whether their study needs were being met. Most notably, it was found that the coupling of cultural perceptual differences with transactional distance created a potential barrier to communication that could affect short-term success in American DE programs. To overcome this barrier, students cited good communication as essential in meeting students' needs, especially communications that were supportive, full of detail and context, and from a primary source (e.g. directly from the professor). Evaluation was a particularly sensitive issue, especially when students were unaware of their professor's cultural and contextual intricacies and were therefore uncertain about expectations and intended meaning. The culturally sensitive graduate students were aware of their position and of the American, rather than global, context in which they were participating. Students appear to have developed "extended identities," meaning that they acculturated to varying degrees in order to be successful in their program, but their local cultural identity was not compromised in any way. For participants from Venezuela, the high cost of technology and telephone lines for communication has made access to higher DE a limiting factor for participation.