Abstract:
BACKGROUND Cognitive problems can have a negative effect on a person's education, but little is known about cognitive problems in young childhood cancer survivors (survivors). This study compared cognitive problems between survivors and their siblings, determined if cognitive problems decreased during recent treatment periods and identified characteristics associated with the presence of a cognitive problem in survivors. METHODS As part of the Swiss Childhood Cancer Survivor Study, a questionnaire was sent to all survivors, aged 8-20 years, registered in the Swiss Childhood Cancer Registry, diagnosed at age <16 years, who had survived ≥5 years. Parent-reported (aged 8-15 years) and self-reported (aged 16-20 years) cognitive problems (concentration, working speed, memory) were compared between survivors and siblings. Multivariable logistic regression was used to identify characteristics associated with cognitive problems in survivors. RESULTS Data from 840 survivors and 247 siblings were analyzed. More often than their siblings, survivors reported problems with concentration (12% vs. 6%; P = 0.020), slow working speed (20% vs. 8%; P = 0.001) or memory (33% vs. 15%; P < 0.001). Survivors from all treatment periods were more likely to report a cognitive problem than were siblings. Survivors of CNS tumors (OR = 2.82 compared to leukemia survivors, P < 0.001) and those who had received cranial irradiation (OR = 2.10, P = 0.010) were most severely affected. CONCLUSION Childhood cancer survivors, even those treated recently (2001-2005), remain at risk of developing cognitive problems, suggesting a need to improve therapies. Survivors with cognitive problems should be given the opportunity to enter special education programs. Pediatr Blood Cancer © 2014 Wiley Periodicals, Inc.
Abstract:
Introduction: Over the last decades, Swiss sports clubs have lost their "monopoly" in the market for sports-related services and are increasingly in competition with other sports providers. For many sports clubs, long-term membership can no longer be taken for granted. Current research on sports clubs in Switzerland – as in other European countries – confirms the increasing difficulty of achieving long-term member commitment. Looking at recent findings of the Swiss sport clubs report (Lamprecht, Fischer & Stamm, 2012), it can be noted that a decrease in memberships does not affect all clubs equally. There are sports clubs – because of their specific situational and structural conditions – that have few problems with member fluctuation, while other clubs show considerable declines in membership. Therefore, a clear understanding of the individual and structural factors that trigger and sustain member commitment would help sports clubs to tackle this problem more effectively. This situation poses the question: What are the individual and structural determinants that influence the tendency to continue or to quit a membership? Methods: Existing research has extensively investigated the drivers of members' commitment at the individual level. As commitment of members usually occurs within an organizational context, the characteristics of the organisation should also be considered. However, this context has been largely neglected in current research. This presentation addresses both the individual characteristics of members and the corresponding structural conditions of sports clubs, resulting in a multi-level framework for the investigation of the factors of members' commitment in sports clubs. Multilevel analysis permits an adequate handling of hierarchically structured data (e.g., Hox, 2002).
The influences of both the individual and context level on the stability of memberships are estimated in multi-level models based on a sample of n = 1,434 sport club members from 36 sports clubs. Results: Results of these multi-level analyses indicate that commitment of members is not just an outcome of individual characteristics, such as strong identification with the club, positively perceived communication and cooperation, satisfaction with sports clubs’ offers, or voluntary engagement. It is also influenced by club-specific structural conditions: stable memberships are more probable in rural sports clubs, and in clubs that explicitly support sociability, whereas sporting-success oriented goals in clubs have a destabilizing effect. Discussion/Conclusion: The proposed multi-level framework and the multi-level analysis can open new perspectives for research concerning commitment of members to sports clubs and other topics and problems of sport organisation research, especially in assisting to understand individual behavior within organizational contexts. References: Hox, J. J. (2002). Multilevel analysis: Techniques and applications. Mahwah: Lawrence Erlbaum. Lamprecht, M., Fischer, A., & Stamm, H.-P. (2012). Die Schweizer Sportvereine – Strukturen, Leistungen, Herausforderungen. Zurich: Seismo.
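The rationale for the multi-level approach can be illustrated with a small simulation: the intraclass correlation (ICC) quantifies how much of a member-level outcome varies between clubs, and a non-trivial ICC is what justifies modeling the club level at all. Everything below (numbers, variances) is invented for illustration and is not the study's data.

```python
import random

random.seed(42)

# Hypothetical illustration: members nested in clubs, a club-level random
# effect plus member-level noise (true club variance 1, member variance 4).
n_clubs, members_per_club = 36, 40
club_effects = [random.gauss(0, 1.0) for _ in range(n_clubs)]  # club level

data = []  # (club id, member's commitment score)
for c, u in enumerate(club_effects):
    for _ in range(members_per_club):
        data.append((c, 5.0 + u + random.gauss(0, 2.0)))  # member level

# One-way variance decomposition -> intraclass correlation (ICC):
# the share of outcome variance attributable to the club level.
grand = sum(y for _, y in data) / len(data)
club_means = [sum(y for cid, y in data if cid == c) / members_per_club
              for c in range(n_clubs)]
ms_between = members_per_club * sum((m - grand) ** 2 for m in club_means) / (n_clubs - 1)
ms_within = sum((y - club_means[cid]) ** 2 for cid, y in data) / (len(data) - n_clubs)
var_club = max(0.0, (ms_between - ms_within) / members_per_club)
icc = var_club / (var_club + ms_within)
print(round(icc, 2))  # should land near the simulated 1 / (1 + 4) = 0.2
```

If the ICC were near zero, an ordinary single-level regression would suffice; it is the club-level share of variance that motivates the multi-level framework described above.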
Abstract:
Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be efficiently used by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
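As a rough sketch of the underlying idea (not the authors' actual algorithm; the entity and use-case names are invented), domain-level coupling can be approximated by counting how often two domain entities co-occur in the same business scenario, and predicting an implementation-level dependency where co-occurrence is frequent:

```python
from collections import Counter

# Hypothetical domain-level information, visible to non-technical users:
# which business entities participate in which use case.
use_cases = {
    "place_order": {"Order", "Customer", "Product"},
    "issue_invoice": {"Invoice", "Order", "Customer"},
    "restock": {"Product", "Warehouse"},
}

# Count co-occurrence of entity pairs across scenarios.
pair_counts = Counter()
for entities in use_cases.values():
    for a in entities:
        for b in entities:
            if a < b:  # each unordered pair once
                pair_counts[(a, b)] += 1

# Entities coupled in >= 2 domain scenarios are predicted to be linked by a
# source-code or database dependency, without inspecting any implementation.
predicted = {pair for pair, n in pair_counts.items() if n >= 2}
print(sorted(predicted))
```

The threshold and co-occurrence measure here are placeholders; the point is that the input is purely domain-level, so no access to source code or database schemas is required.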
Abstract:
Answering run-time questions in object-oriented systems involves reasoning about and exploring connections between multiple objects. Developer questions exercise various aspects of an object and require multiple kinds of interactions depending on the relationships between objects, the application domain and the differing developer needs. Nevertheless, traditional object inspectors, the essential tools often used to reason about objects, favor a generic view that focuses on the low-level details of the state of individual objects. This leads to an inefficient effort, increasing the time spent in the inspector. To improve the inspection process, we propose the Moldable Inspector, a novel approach for an extensible object inspector. The Moldable Inspector allows developers to look at objects using multiple interchangeable presentations and supports a workflow in which multiple levels of connecting objects can be seen together. Both these aspects can be tailored to the domain of the objects and the question at hand. We further exemplify how the proposed solution improves the inspection process, introduce a prototype implementation and discuss new directions for extending the Moldable Inspector.
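The idea of interchangeable, domain-tailored presentations can be sketched in a few lines. This is a hypothetical Python analogue with invented names (the actual Moldable Inspector is implemented in Pharo): presentations are registered per object type, and the inspector falls back to a generic raw view when no domain view exists.

```python
# Registry of presentations: type -> list of (label, render function).
presentations = {}

def presentation(typ, label):
    """Decorator registering a domain-specific view for a type."""
    def register(fn):
        presentations.setdefault(typ, []).append((label, fn))
        return fn
    return register

@presentation(dict, "Keys")
def dict_keys(obj):
    return sorted(obj)  # a tailored view: just the sorted keys

@presentation(dict, "Raw")
def dict_raw(obj):
    return repr(obj)    # the generic low-level view, still available

def inspect(obj):
    # Fall back to a single generic raw view for unregistered types.
    views = presentations.get(type(obj), [("Raw", repr)])
    return {label: fn(obj) for label, fn in views}

views = inspect({"b": 2, "a": 1})
print(views["Keys"])
```

The same object thus offers several interchangeable presentations, and extending the inspector to a new domain means registering new view functions rather than modifying the inspector itself.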
Abstract:
Debuggers are crucial tools for developing object-oriented software systems as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements and show how our approach improves debugging by adapting the debugger to several domains.
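The run-time selection mechanism can be sketched as follows (a hedged analogue with invented names, not the Moldable Debugger's actual API): each domain-specific debugger registers an activation predicate, and the first registration whose predicate matches the current execution context supplies the views.

```python
# Registrations: (activation predicate over the current frame, view name).
debuggers = []

def register(predicate, view):
    debuggers.append((predicate, view))

# A domain-specific debugger activates only for embedded SQL execution;
# the generic stack view is registered last as the fallback.
register(lambda frame: frame.get("lang") == "SQL", "query-plan view")
register(lambda frame: True, "generic stack view")

def select_view(frame):
    # First matching registration wins, so domain-specific debuggers
    # shadow the generic one whenever their predicate holds.
    return next(view for pred, view in debuggers if pred(frame))

print(select_view({"lang": "SQL"}))   # domain-specific debugging view
print(select_view({"lang": "Java"}))  # generic fallback
```

This mirrors the abstract's two directions of adaptation: developers adapt the debugger by registering operations and views, and the debugger adapts itself by evaluating the predicates at run time.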
Abstract:
Polymorphism, along with inheritance, is one of the most important features in object-oriented languages, but it is also one of the biggest obstacles to source code comprehension. Depending on the run-time type of the receiver of a message, any one of a number of possible methods may be invoked. Several algorithms for creating accurate call-graphs using static analysis already exist; however, they consume significant time and memory resources. We propose an approach that combines static and dynamic analysis to yield the best possible precision with a minimal trade-off between resource use and accuracy.
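A minimal sketch of such a combination (class and method names invented; not the paper's algorithm): the static analysis over-approximates the possible targets of a polymorphic send with every implementation in the hierarchy, and recorded run-time receiver types prune that set where a trace is available.

```python
# Static over-approximation: every implementation of the sent message
# is a potential target of the polymorphic call site.
static_targets = {
    "Shape.draw": {"Circle.draw", "Square.draw", "Triangle.draw"},
}

# Receiver types actually observed while executing an instrumented run.
observed_receivers = {"Shape.draw": {"Circle", "Square"}}

def refine(call_site):
    seen = observed_receivers.get(call_site)
    if seen is None:
        # No dynamic information: fall back to the sound static result.
        return static_targets[call_site]
    # Keep only targets whose defining class was observed as a receiver.
    return {t for t in static_targets[call_site] if t.split(".")[0] in seen}

print(sorted(refine("Shape.draw")))
```

Note the trade-off this encodes: the pruned set is more precise but only as complete as the executions that were traced, which is why the combination, rather than either analysis alone, is attractive.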
Abstract:
Synchronizing mind maps with fuzzy cognitive maps can help to handle complex problems with many involved stakeholders by taking advantage of human creativity. The proposed approach has the capacity to instantiate cognitive cities by including cognitive computing. A use case in the context of decision-finding (concerning a transportation system) is presented to illustrate the approach.
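The fuzzy cognitive map at the core of this approach is simple to state: each concept's activation is updated from its weighted causal inputs, typically through a sigmoid. The concepts and weights below are invented for a transportation-style scenario and are only illustrative.

```python
import math

# Concepts of a toy transportation FCM and signed causal weights in [-1, 1]:
# better public transport reduces traffic; traffic increases emissions.
concepts = ["traffic", "public_transport", "emissions"]
W = {
    ("public_transport", "traffic"): -0.7,
    ("traffic", "emissions"): 0.8,
}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Update rule: A_i(t+1) = f(A_i(t) + sum_j w_ji * A_j(t)).
state = {"traffic": 0.8, "public_transport": 0.9, "emissions": 0.5}
for _ in range(20):  # iterate until the map settles
    state = {
        c: sigmoid(state[c] + sum(w * state[src]
                                  for (src, tgt), w in W.items() if tgt == c))
        for c in concepts
    }
print({c: round(v, 2) for c, v in state.items()})
```

Stakeholder input from mind maps would populate the concept list and weights; the converged activations then support decision-finding, e.g. comparing scenarios with stronger or weaker public-transport investment.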
Abstract:
We prove exponential rates of convergence of hp-version discontinuous Galerkin (dG) interior penalty finite element methods for second-order elliptic problems with mixed Dirichlet-Neumann boundary conditions in axiparallel polyhedra. The dG discretizations are based on axiparallel, σ-geometric anisotropic meshes of mapped hexahedra and anisotropic polynomial degree distributions of μ-bounded variation. We consider piecewise analytic solutions which belong to a larger analytic class than those for the pure Dirichlet problem considered in [11, 12]. For such solutions, we establish the exponential convergence of a nonconforming dG interpolant given by local L²-projections on elements away from corners and edges, and by suitable local low-order quasi-interpolants on elements at corners and edges. Due to the appearance of non-homogeneous, weighted norms in the analytic regularity class, new arguments are introduced to bound the dG consistency errors in elements abutting on Neumann edges. The non-homogeneous norms also entail some crucial modifications of the stability and quasi-optimality proofs, as well as of the analysis for the anisotropic interpolation operators. The exponential convergence bounds for the dG interpolant constructed in this paper generalize the results of [11, 12] for the pure Dirichlet case.
Abstract:
BACKGROUND Implant-overdentures supported by rigid bars provide stability in the edentulous atrophic mandible. However, fractures of solder joints and matrices, and loosening of screws and matrices, were observed with soldered gold bars (G-bars). Computer-aided designed/computer-assisted manufactured (CAD/CAM) titanium bars (Ti-bars) may reduce technical complications due to enhanced material quality. PURPOSE To compare prosthetic-technical maintenance service of mandibular implant-overdentures supported by CAD/CAM Ti-bars and soldered G-bars. MATERIALS AND METHODS Edentulous patients were consecutively admitted for implant-prosthodontic treatment with a maxillary complete denture and a mandibular implant-overdenture connected to a rigid G-bar or Ti-bar. Maintenance service and problems with the implant-retention device complex and the prosthesis were recorded during a minimum of 3-4 years. Annual peri-implant crestal bone level changes (ΔBIC) were radiographically assessed. RESULTS Data of 213 edentulous patients (mean age 68 ± 10 years), who had received a total of 477 tapered implants, were available. The Ti-bar and G-bar groups comprised 101 and 112 patients with 231 and 246 implants, respectively. Ti-bars mostly exhibited distal bar extensions (96%) compared to 34% of G-bars (p < .001). The fracture rate of bar extensions (4.7% vs 14.8%, p < .001) and matrices (1% vs 13%, p < .001) was lower for Ti-bars. Matrix activation was required 2.4× less often in the Ti-bar group. ΔBIC remained stable for both groups. CONCLUSIONS Implant overdentures supported by soldered gold bars or milled CAD/CAM Ti-bars are a successful treatment modality but require regular maintenance service. These short-term observations support the hypothesis that CAD/CAM Ti-bars reduce technical complications. Fracture location indicated that the titanium thickness around the screw-access hole should be increased.
Abstract:
Background The few studies that have evaluated syntax in autism spectrum disorder (ASD) have yielded conflicting findings: some suggest that once matched on mental age, ASD and typically developing controls do not differ for grammar, while others report that morphosyntactic deficits are independent of cognitive skills in ASD. There is a need for a better understanding of syntax in ASD and its relation to, or dissociation from, nonverbal abilities. Aims Syntax in ASD was assessed by evaluating subject and object relative clause comprehension in adolescents and adults diagnosed with ASD with a performance IQ within the normal range, and with or without a history of language delay. Methods & Procedures Twenty-eight participants with ASD (mean age 21.8) and 28 age-matched controls (mean age 22.07) were required to point to a character designated by relative clauses that varied in syntactic complexity. Outcomes & Results Scores indicate that participants with ASD regardless of the language development history perform significantly worse than age-matched controls with object relative clauses. In addition, participants with ASD with a history of language delay (diagnosed with high-functioning autism in the DSM-IV-TR) perform worse on subject relatives than ASD participants without language delay (diagnosed with Asperger syndrome in the DSM-IV-TR), suggesting that these two groups do not have equivalent linguistic abilities. Performance IQ has a positive impact on the success of the task for the population with ASD. Conclusions & Implications This study reveals subtle grammatical difficulties remaining in adult individuals with ASD within normal IQ range as compared with age-matched peers. Even in the absence of a history of language delay in childhood, the results suggest that a slight deficit may nevertheless be present and go undetected by standardized language assessments. 
Both groups with and without language delay have a similar global performance on relative clause comprehension; however, the study also indicates that the participants with reported language delay show more difficulty with subject relatives than the participants without language delay, suggesting the presence of differences in linguistic abilities between these subgroups of ASD.
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as a global model. The algorithm is tested on four benchmark problems, whose number of starting points ranges from 10² to 10⁴. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
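The standard EI criterion that the framework builds on can be written down directly. The sketch below is a generic implementation for minimization given a GP posterior mean and standard deviation at a candidate point; it is not SpLEGO itself, only the acquisition function it reuses.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - f(x), 0)] under N(mu, sigma^2)."""
    if sigma <= 0.0:
        return 0.0  # no posterior uncertainty -> no expected improvement
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf

# The exploration-exploitation trade-off in one comparison: a candidate with
# a worse posterior mean but high uncertainty can still score well.
ei_exploit = expected_improvement(mu=0.9, sigma=0.05, f_best=1.0)
ei_explore = expected_improvement(mu=1.2, sigma=1.00, f_best=1.0)
print(ei_exploit > 0.0, ei_explore > 0.0)
```

In the multi-scale scheme described above, this criterion is evaluated against the local GP models inside the regions pre-selected by the global sparse GP.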
Abstract:
Introduction: Although it seems plausible that sports performance relies on high-acuity foveal vision, it could be empirically shown that myopic blur (up to +2 diopters) does not harm performance in sport tasks that require foveal information pick-up like golf putting (Bulson, Ciuffreda, & Hung, 2008). How myopic blur affects peripheral performance is yet unknown. Less attention might be needed for foveal processing of visual cues, so that peripheral cues are processed better as a function of reduced foveal vision; this was tested in the current experiment. Methods: 18 sport science students with self-reported myopia volunteered as participants, all of them regularly wearing contact lenses. Exclusion criteria comprised visual correction other than myopic, correction of astigmatism and use of contact lenses out of Swiss delivery area. For each of the participants, three pairs of additional contact lenses (besides their regular lenses; used in the "plano" condition) were manufactured with an individual overcorrection to a retinal defocus of +1 to +3 diopters (referred to as "+1.00 D", "+2.00 D", and "+3.00 D" condition, respectively). Gaze data were acquired while participants had to perform a multiple object tracking (MOT) task that required them to track 4 out of 10 moving stimuli. In addition, in 66.7 % of all trials, one of the 4 targets suddenly stopped during the motion phase for a period of 0.5 s. Stimuli moved in front of a picture of a sports hall to allow for foveal processing. Due to the directional hypotheses, the level of significance for one-tailed tests on differences was set at α = .05, and a posteriori effect sizes were computed as partial eta squared (ηp²). Results: Due to problems with the gaze-data collection, 3 participants had to be excluded from further analyses. The expectation of a centroid strategy was confirmed because gaze was closer to the centroid than to the target (all p < .01).
In comparison to the plano baseline, participants more often recalled all 4 targets under defocus conditions, F(1,14) = 26.13, p < .01, ηp² = .65. The three defocus conditions differed significantly, F(2,28) = 2.56, p = .05, ηp² = .16, with a higher accuracy as a function of a defocus increase and significant contrasts between conditions +1.00 D and +2.00 D (p = .03) and +1.00 D and +3.00 D (p = .03). For stop trials, significant differences could neither be found between plano baseline and defocus conditions, F(1,14) = .19, p = .67, ηp² = .01, nor between the three defocus conditions, F(2,28) = 1.09, p = .18, ηp² = .07. Participants reacted faster in "4 correct+button" trials under defocus than under plano-baseline conditions, F(1,14) = 10.77, p < .01, ηp² = .44. The defocus conditions differed significantly, F(2,28) = 6.16, p < .01, ηp² = .31, with shorter response times as a function of a defocus increase and significant contrasts between +1.00 D and +2.00 D (p = .01) and +1.00 D and +3.00 D (p < .01). Discussion: The results show that gaze behaviour in MOT is not affected to a relevant degree by a visual overcorrection up to +3 diopters. Hence, it can be taken for granted that peripheral event detection was investigated in the present study. This overcorrection, however, does not harm the capability to peripherally track objects. Moreover, if an event has to be detected peripherally, neither response accuracy nor response time is negatively affected. The findings are potentially relevant for all sport situations in which peripheral vision is required, which now calls for applied studies on this topic. References: Bulson, R. C., Ciuffreda, K. J., & Hung, G. K. (2008). The effect of retinal defocus on golf putting. Ophthalmic and Physiological Optics, 28, 334-344.
Abstract:
Introduction. Selectively manned units have a long, international history, both military and civilian. Some examples include SWAT teams, firefighters, the FBI, the DEA, the CIA, and military Special Operations. These special duty operators are individuals who perform a highly skilled and dangerous job in a unique environment. A significant amount of money is spent by the Department of Defense (DoD) and other federal agencies to recruit, select, train, equip and support these operators. When a critical incident or significant life event occurs that jeopardizes an operator's performance, there can be heavy losses in terms of training, time, money, and potentially, lives. In order to limit the number of critical incidents, selection processes have been developed over time to "select out" those individuals most likely to perform below desired performance standards under pressure or stress and to "select in" those with the "right stuff". This study is part of a larger program evaluation to assess markers that identify whether a person will fail under the stresses of a selectively manned unit. The primary question of the study is whether there are indicators in the selection process that signify potential negative performance at a later date. Methods. The population being studied included applicants to a selectively manned DoD organization between 1993 and 2001 as part of a unit assessment and selection process (A&S). Approximately 1,900 A&S records were included in the analysis. Over this nine-year period, seventy-two individuals were determined to have had a critical incident. A critical incident can come in the form of problems with the law; personal, behavioral or family problems; integrity issues; and skills deficits. Of the seventy-two individuals, fifty-four had full assessment data and subsequent supervisor performance ratings which assessed how an individual performed while on the job.
This group was compared across a variety of variables, including demographics and psychometric testing, with a group of 178 individuals who did not have a critical incident and had been determined to be good performers with positive ratings by their supervisors. Results. In approximately 2004, an online pre-screen survey was developed in the hope of selecting out those individuals with items that would potentially make them ineligible for selection to this organization. This survey has helped the organization increase its selection rates and save resources in the process (Patterson, Howard Smith, & Fisher, Unit Assessment and Selection Project, 2008). When the same pre-screen was used on the critical incident individuals, it was found that over 60% of them would have been flagged as unacceptable, which would have saved the organization valuable resources and heartache. There were some subtle demographic differences between the two groups (e.g., those with critical incidents were almost twice as likely to be divorced compared with the positive performers). Upon comparison of psychometric testing, several items were noted to differ. The two groups were similar when their IQ levels were compared using the Multidimensional Aptitude Battery (MAB). On the Minnesota Multiphasic Personality Inventory (MMPI), there appeared to be a difference on Social Introversion; the critical incident group scored somewhat higher. The number of MMPI Critical Items between the two groups was similar as well. When scores on the NEO Personality Inventory (NEO) were compared, the critical incident individuals tended to score higher on Openness and its subscales (Ideas, Actions, and Feelings). There was a positive correlation between Total Neuroticism T Score and the number of MMPI critical items. Conclusions.
This study shows that the current pre-screening process is working and would have saved the organization significant resources. If one were to develop a profile of a candidate who could potentially suffer a critical incident and subsequently jeopardize the unit, the mission and the safety of the public, it would look like the following: either divorced or never married; scores high on MMPI Social Introversion; scores low on the MMPI with an "excessive" number of MMPI critical items; and scores high on NEO Openness and its subscales Ideas, Feelings, and Actions. Based on the results of the analysis in this study, there seem to be several factors within psychometric testing that, taken together, will aid evaluators in selecting only the highest-quality operators, saving resources and helping to protect the public from unfortunate critical incidents which may adversely affect health and safety.
Abstract:
Characteristics of Medicare-certified home health agencies in Texas and the contributions of selected agency characteristics to home health care costs were examined. Cost models were developed and estimated for both nursing and total visit costs using multiple regression procedures. The models included home health agency size, profit status, control, hospital-based affiliation, contract-cost ratio, service provision, competition, urban-rural input-price differences, and selected measures of patient case-mix. The study population comprised 314 home health agencies in Texas that had been certified at least one year as of July 1, 1986. Data for the analysis were obtained from Medicare Cost Reports for fiscal years ending between July 1, 1985 and June 30, 1986. Home health agency size, as measured by the logs of nursing and total visits, has a statistically significant negative linear relationship with nursing visit and total visit costs. Nursing and total visit costs decrease at a declining rate as size increases. The size-cost relationship is not altered when controlling for any other agency characteristic. The number of visits per patient per year, a measure of patient case-mix, is also negatively related to costs, suggesting that costs decline with care of chronic patients. Hospital-based affiliation and urban location are positively associated with costs. Together, the four characteristics explain 19 percent of the variance in nursing visit costs and 24 percent of the variance in total visit costs. Profit status and control, although correlated with other agency characteristics, exhibit no observable effect on costs. Although no relationship was found between costs and competition, contract-cost ratio, or the provision of non-reimbursable services, no conclusions can be drawn due to problems with the measurement of these variables.
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation-based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs, or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric – in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins" – multivariant, and flow-sensitive. It is also based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
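The parametric character of such a fixpoint algorithm can be illustrated with a generic worklist solver in which the abstract domain plugs in as bottom/join/ordering. This is a hedged sketch, not the paper's algorithm: the sign domain and the tiny flow graph below are invented for illustration.

```python
def fixpoint(blocks, succs, transfer, bottom, join, leq):
    """Worklist fixpoint solver, parametric in the abstract domain."""
    state = {b: bottom for b in blocks}
    work = list(blocks)
    while work:
        b = work.pop()
        out = transfer(b, state[b])
        for s in succs.get(b, ()):
            new = join(state[s], out)
            if not leq(new, state[s]):   # state grew: re-analyse successor
                state[s] = new
                work.append(s)
    return state

# Plug-in domain: signs of a variable, as sets ordered by inclusion.
blocks = ["entry", "loop", "exit"]
succs = {"entry": ["loop"], "loop": ["loop", "exit"]}

def transfer(block, facts):
    if block == "entry":
        return frozenset({"+"})  # models x := 1
    return facts                 # x := x + 1 preserves the sign '+'

result = fixpoint(blocks, succs, transfer,
                  bottom=frozenset(),
                  join=lambda a, b: a | b,
                  leq=lambda a, b: a <= b)
print(result["exit"])
```

Swapping in a different `(bottom, join, leq, transfer)` tuple changes the analysis without touching the solver, which is the "plug-in" property the abstract describes; the paper's optimizations then aim at reducing how often blocks re-enter the worklist.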