994 results for Explicit method


Relevance:

60.00%

Publisher:

Abstract:

In the last several years there has been an increase in the amount of qualitative research using in-depth interviews and comprehensive content analyses in sport psychology. However, no explicit method has been provided to deal with the large amount of unstructured data. This article provides common guidelines for organizing and interpreting unstructured data. Two main operations are suggested and discussed: first, coding meaningful text segments, or creating tags, and second, regrouping similar text segments, or creating categories. Furthermore, software programs for the microcomputer are presented as a way to facilitate the organization and interpretation of qualitative data.
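
To make the two suggested operations concrete, here is a minimal sketch in Python. The segment texts, tag names, and category map are invented for illustration and are not taken from the article or its software recommendations.

# Operation 1: coding - attach a tag to each meaningful text segment.
# (Hypothetical interview segments and a hand-built tag assignment.)
segments = {
    "I get nervous before every race": "pre-competition anxiety",
    "My coach keeps me focused": "coach support",
    "Butterflies in my stomach at the start line": "pre-competition anxiety",
}

# Operation 2: categorizing - regroup similar tags under broader categories.
category_map = {"pre-competition anxiety": "stress", "coach support": "social support"}
categories = {}
for text, tag in segments.items():
    categories.setdefault(category_map[tag], []).append(text)

for category, texts in categories.items():
    print(category, "->", len(texts), "segments")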

Relevance:

60.00%

Publisher:

Abstract:

This paper compares three alternative numerical algorithms applied to a nonlinear metal cutting problem. One algorithm is based on an explicit method and the other two are implicit. Domain decomposition (DD) is used to break the original domain into subdomains, each containing a properly connected, well-formulated and continuous subproblem. The serial version of the explicit algorithm is implemented in FORTRAN and its parallel version uses MPI (Message Passing Interface) calls. One implicit algorithm is implemented by coupling the state-of-the-art PETSc (Portable, Extensible Toolkit for Scientific Computation) software with in-house software in order to solve the subproblems. The second implicit algorithm is implemented completely within PETSc. PETSc uses MPI as the underlying communication library. Finally, a 2D example is used to test the algorithms and various comparisons are made.
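
As a hedged illustration of the explicit/implicit trade-off being compared, the sketch below advances a toy 1D diffusion problem one step with each scheme. It is not the paper's metal cutting formulation, its domain decomposition, or its PETSc/MPI implementation.

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

n = 50
dx = 1.0 / (n + 1)
dt = 1e-4
alpha = 1.0
r = alpha * dt / dx**2                    # explicit stability requires r <= 0.5
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")

u = np.sin(np.pi * dx * np.arange(1, n + 1))   # initial profile

# Explicit step (forward Euler): cheap matrix-vector product, conditionally stable.
u_explicit = u + r * (L @ u)

# Implicit step (backward Euler): one linear solve, unconditionally stable.
u_implicit = spsolve(identity(n, format="csc") - r * L, u)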

Relevance:

40.00%

Publisher:

Abstract:

Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
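
For orientation, the following sketch implements one generic step of the Newmark family that these algorithms extend; the generalized alpha method adds the shift parameters alpha_m and alpha_f on top of beta and gamma. This is a textbook step, not the paper's matched explicit/implicit partition.

# One step of the Newmark integrator for M a + C v + K d = f(t).
# beta = 1/4, gamma = 1/2 gives the implicit average-acceleration rule;
# beta = 0, gamma = 1/2 gives the explicit central-difference rule.
import numpy as np

def newmark_step(M, C, K, d, v, a, f_next, dt, beta=0.25, gamma=0.5):
    # Predictors built from quantities known at the current step.
    d_pred = d + dt * v + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # Effective linear system for the new acceleration.
    lhs = M + gamma * dt * C + beta * dt**2 * K
    a_new = np.linalg.solve(lhs, f_next - C @ v_pred - K @ d_pred)
    # Correctors.
    d_new = d_pred + beta * dt**2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return d_new, v_new, a_new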

Relevance:

30.00%

Publisher:

Abstract:

We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from more than 10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
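
Stripped of the authors' specific (and much faster) optimizer, the underlying constraint is that each native structure must score better than its misfolded alternatives. A minimal perceptron-style sketch with invented feature vectors:

# Invented example: weights w define a threading energy E(x) = w . x,
# where x is a feature vector (e.g., pairwise contact counts). We require
# every native structure to have lower energy than each of its decoys.
import numpy as np

def fit_threading_weights(native_feats, decoy_feats, epochs=100, lr=0.01):
    """native_feats[i] is the feature vector of native structure i;
    decoy_feats[i] is a list of feature vectors of its misfolded decoys."""
    w = np.zeros_like(native_feats[0], dtype=float)
    for _ in range(epochs):
        for x_nat, decoys in zip(native_feats, decoy_feats):
            for x_dec in decoys:
                if w @ x_nat >= w @ x_dec:      # violated: native not below decoy
                    w += lr * (x_dec - x_nat)   # lower native energy relative to decoy
    return w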

Relevance:

30.00%

Publisher:

Abstract:

High index Differential Algebraic Equations (DAEs) force standard numerical methods to lower order. Implicit Runge-Kutta methods such as RADAU5 handle high index problems but their fully implicit structure creates significant overhead costs for large problems. Singly Diagonally Implicit Runge-Kutta (SDIRK) methods offer lower costs for integration. This paper derives a four-stage, index 2 Explicit Singly Diagonally Implicit Runge-Kutta (ESDIRK) method. By introducing an explicit first stage, the method achieves second order stage calculations. After deriving and solving appropriate order conditions, numerical examples are used to test the proposed method using fixed and variable step size implementations. (C) 2001 IMACS. Published by Elsevier Science B.V. All rights reserved.
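
The structural point, an explicit first stage followed by implicit stages sharing one diagonal coefficient, can be sketched generically. The loop below takes an arbitrary ESDIRK Butcher tableau as input and is applied to an ODE rather than an index 2 DAE for simplicity; it does not reproduce the paper's derived coefficients.

# Generic ESDIRK stage loop for y' = f(t, y). The Butcher tableau (A, b, c)
# is an input: A[0, 0] == 0 makes the first stage explicit, and stages
# i >= 1 share one diagonal value A[i, i], so each needs a single
# nonlinear solve of fixed structure.
import numpy as np
from scipy.optimize import fsolve

def esdirk_step(f, t, y, h, A, b, c):
    s = len(b)
    y = np.atleast_1d(np.asarray(y, dtype=float))
    K = np.zeros((s, y.size))
    K[0] = f(t, y)                                   # explicit first stage
    for i in range(1, s):
        known = y + h * sum(A[i, j] * K[j] for j in range(i))
        g = lambda k, i=i, known=known: k - f(t + c[i] * h, known + h * A[i, i] * k)
        K[i] = fsolve(g, K[i - 1])                   # one implicit solve per stage
    return y + h * np.asarray(b) @ K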

Relevance:

30.00%

Publisher:

Abstract:

In this work, we present the explicit series solution of a specific mathematical model from the literature, the Deng bursting model, which mimics the glucose-induced electrical activity of pancreatic beta-cells (Deng, 1993). For this purpose, we use a technique developed to find analytic approximate solutions for strongly nonlinear problems. This analytical algorithm involves an auxiliary parameter which provides us with an efficient way to ensure rapid and accurate convergence to the exact solution of the bursting model. By using the homotopy solution, we investigate the dynamical effect of a biologically meaningful bifurcation parameter rho, which increases with the glucose concentration. Our analytical results are found to be in excellent agreement with the numerical ones. This work provides an illustration of how our understanding of biophysically motivated models can be directly enhanced by the application of a new analytic method.
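
The flavor of such a homotopy series construction can be shown on a linear test problem. The sketch below treats y' + y = 0, y(0) = 1, with auxiliary linear operator d/dt and convergence-control parameter hbar; it is a toy, not the Deng bursting model or the authors' solution.

import sympy as sp

t, hbar = sp.symbols("t hbar")
u = [sp.Integer(1)]                          # u0: initial guess, satisfies y(0) = 1
for m in range(1, 6):
    chi = 0 if m == 1 else 1                 # standard homotopy switch chi_m
    residual = sp.diff(u[m - 1], t) + u[m - 1]   # N[u_{m-1}] for N[y] = y' + y
    F = sp.integrate(residual, t)            # inverse of the auxiliary operator d/dt
    u.append(sp.expand(chi * u[m - 1] + hbar * (F - F.subs(t, 0))))

approx = sum(u)
print(sp.simplify(approx.subs(hbar, -1)))    # hbar = -1 recovers 1 - t + t**2/2 - ...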

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Increasing the appropriateness of use of upper gastrointestinal (GI) endoscopy is important to improve quality of care while at the same time containing costs. This study explored whether detailed explicit appropriateness criteria significantly improve the diagnostic yield of upper GI endoscopy. METHODS: Consecutive patients referred for upper GI endoscopy at 6 centers (1 university hospital, 2 district hospitals, 3 gastroenterology practices) were prospectively included over a 6-month period. After controlling for disease presentation and patient characteristics, the relationship between the appropriateness of upper GI endoscopy, as assessed by explicit Swiss criteria developed by the RAND/UCLA panel method, and the presence of relevant endoscopic lesions was analyzed. RESULTS: A total of 2088 patients (60% outpatients, 57% men) were included. Analysis was restricted to the 1681 patients referred for diagnostic upper GI endoscopy. Forty-six percent of upper GI endoscopies were judged to be appropriate, 15% uncertain, and 39% inappropriate by the explicit criteria. No cancer was found in upper GI endoscopies judged to be inappropriate. Upper GI endoscopies judged appropriate or uncertain yielded significantly more relevant lesions (60%) than did those judged to be inappropriate (37%; odds ratio 2.6; 95% CI [2.2, 3.2]). In multivariate analyses, the diagnostic yield of upper GI endoscopy was significantly influenced by appropriateness, patient gender and age, treatment setting, and symptoms. CONCLUSIONS: Upper GI endoscopies performed for appropriate indications resulted in detecting significantly more clinically relevant lesions than did those performed for inappropriate indications. In addition, no upper GI endoscopy that resulted in a diagnosis of cancer was judged to be inappropriate. The use of such criteria improves patient selection for upper GI endoscopy and can thus contribute to efforts aimed at enhancing the quality and efficiency of care. (Gastrointest Endosc 2000;52:333-41).

Relevance:

30.00%

Publisher:

Abstract:

Background: Ulcerative colitis (UC) is a chronic disease with a wide variety of treatment options, many of which are not evidence based. Supplementing available guidelines, which are often broadly defined, consensus-based and generally not tailored to specifically reflect the individual patient situation, we developed explicit appropriateness criteria to assist and improve treatment decisions. Methods: We used the RAND appropriateness method, which does not force consensus. An extensive literature review was compiled based on and supplementing, where necessary, the ECCO UC 2011 guidelines. EPATUC (endorsed by ECCO) was formed by 8 gastroenterologists, 2 surgeons and 2 general practitioners from throughout Europe. Clinical scenarios reflecting practice were rated on a 9-point scale from 1 (extremely inappropriate) to 9 (extremely appropriate), based on the experts' experience and the available literature. After extensive discussion, all scenarios were re-rated at a two-day panel meeting. Median and disagreement were used to categorize ratings into 3 categories: appropriate, uncertain and inappropriate. Results: 718 clinical scenarios were rated, structured in 13 main clinical presentations: not refractory (n=64) or refractory (n=33) proctitis, mild to moderate left-sided (n=72) or extensive (n=48) colitis, severe colitis (n=36), steroid-dependent colitis (n=36), steroid-refractory colitis (n=55), acute pouchitis (n=96), maintenance of remission (n=248), colorectal cancer prevention (n=9) and fulminant colitis (n=9). Overall, 100 indications were judged appropriate (14%), 129 uncertain (18%) and 489 inappropriate (68%). Disagreement between experts was very low (6%). Conclusion: For the very first time, explicit appropriateness criteria for the therapy of UC were developed that allow both specific and rapid therapeutic decision making and prospective assessment of treatment appropriateness. Comparison of these detailed scenarios with patient profiles encountered in the Swiss IBD cohort study indicates good concordance. The EPATUC criteria will be freely accessible on the internet (epatuc.ch).
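
For illustration only, here is a sketch of the median-plus-disagreement categorization that RAND-style panels use. The abstract does not state the panel's exact disagreement rule, so the thresholds below are an assumed common variant, not the EPATUC definition.

from statistics import median

def categorize(ratings):
    """ratings: one 1-9 score per panelist for a single clinical scenario."""
    n = len(ratings)
    med = median(ratings)
    # Assumed disagreement rule: at least a third of the panel in each extreme third.
    disagreement = (sum(r <= 3 for r in ratings) >= n / 3
                    and sum(r >= 7 for r in ratings) >= n / 3)
    if disagreement:
        return "uncertain"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

print(categorize([8, 8, 7, 9, 8, 7, 8, 9, 7, 8, 8]))   # -> appropriate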

Relevance:

30.00%

Publisher:

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
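
As a rough sketch of the polynomial decomposition described above, pixel variance can be fitted as a quadratic in DAK, with the constant, linear, and quadratic coefficients read as the electronic, quantum, and fixed-pattern contributions. The DAK and variance values below are placeholders, not measured data, and the 1/var weighting is only one of the options the study explores.

import numpy as np

# Placeholder data: target DAK levels (µGy) and measured pixel variance.
dak = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
var = np.array([2.1, 3.4, 6.0, 11.2, 21.9, 44.0, 90.5, 195.0, 430.0])

# Weighted quadratic fit, var = k_e + k_q*DAK + k_f*DAK**2; weighting
# (here ~1/var) improves the fit at low exposures, as noted above.
k_f, k_q, k_e = np.polyfit(dak, var, deg=2, w=1.0 / var)

electronic = k_e                 # constant term
quantum = k_q * dak              # linear in DAK
fixed_pattern = k_f * dak**2     # quadratic in DAK
quantum_dominated = quantum > (electronic + fixed_pattern)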

Relevance:

30.00%

Publisher:

Abstract:

Background: Ulcerative colitis (UC) is a chronic disease with a wide variety of treatment options, many of which are not evidence based. Supplementing available guidelines, which are often broadly defined, consensus-based and generally not tailored to specifically reflect the individual patient situation, we developed explicit appropriateness criteria to assist and improve treatment decisions. Methods: We used the RAND appropriateness method, which does not force consensus. An extensive literature review was compiled based on and supplementing, where necessary, the ECCO UC 2011 guidelines. EPATUC (endorsed by ECCO) was formed by 7 gastroenterologists, 2 surgeons and 2 general practitioners from throughout Europe. Clinical scenarios reflecting practice were rated on a 9-point scale from 1 (extremely inappropriate) to 9 (extremely appropriate), based on the experts' experience and the available literature. After extensive discussion, all scenarios were re-rated at a two-day panel meeting. Median and disagreement (D) were used to categorize ratings into 3 categories: appropriate (A), uncertain (U) and inappropriate (I). Results: 718 clinical scenarios were rated, structured in 13 main clinical presentations: not refractory (n = 64) or refractory (n = 33) proctitis, mild to moderate left-sided (n = 72) or extensive (n = 48) colitis, severe colitis (n = 36), steroid-dependent colitis (n = 36), steroid-refractory colitis (n = 55), acute pouchitis (n = 96), maintenance of remission (n = 248), colorectal cancer prevention (n = 9) and fulminant colitis (n = 9). Overall, 100 indications were judged appropriate (14%), 129 uncertain (18%) and 489 inappropriate (68%). Disagreement between experts was very low (6%). Conclusions: For the very first time, explicit appropriateness criteria for the therapy of UC were developed that allow both specific and rapid therapeutic decision making and prospective assessment of treatment appropriateness. Comparison of these detailed scenarios with patient profiles encountered in the Swiss IBD cohort study indicates good concordance. The EPATUC criteria will be freely accessible on the internet (epatuc.ch).

Relevance:

30.00%

Publisher:

Abstract:

Life cycle analyses (LCA) approaches require adaptation to reflect the increasing delocalization of production to emerging countries. This work addresses this challenge by establishing a country-level, spatially explicit life cycle inventory (LCI). This study comprises three separate dimensions. The first dimension is spatial: processes and emissions are allocated to the country in which they take place and modeled to take into account local factors. The emerging economies China and India are the locations of production; consumption occurs in Germany, an Organisation for Economic Cooperation and Development country. The second dimension is the product level: we consider two distinct textile garments, a cotton T-shirt and a polyester jacket, in order to highlight potential differences in the production and use phases. The third dimension is the inventory composition: we track CO2, SO2, NOx, and particulates, four major atmospheric pollutants, as well as energy use. This third dimension enriches the analysis of the spatial differentiation (first dimension) and distinct products (second dimension). We describe the textile production and use processes and define a functional unit for a garment. We then model important processes using a hierarchy of preferential data sources. We place special emphasis on the modeling of the principal local energy processes: electricity and transport in emerging countries. The spatially explicit inventory is disaggregated by country of location of the emissions and analyzed according to the dimensions of the study: location, product, and pollutant. The inventory shows striking differences between the two products considered as well as between the different pollutants considered. For the T-shirt, over 70% of the energy use and CO2 emissions occur in the consuming country, whereas for the jacket, more than 70% occur in the producing country. This reversal of proportions is due to differences in the use phase of the garments. For SO2, in contrast, over two thirds of the emissions occur in the country of production for both T-shirt and jacket. The difference in emission patterns between CO2 and SO2 is due to local electricity processes, justifying our emphasis on local energy infrastructure. The complexity of considering differences in location, product, and pollutant is rewarded by a much richer understanding of a global production-consumption chain. The inclusion of two different products in the LCI highlights the importance of the definition of a product's functional unit in the analysis and implications of results. Several use-phase scenarios demonstrate the importance of consumer behavior over equipment efficiency. The spatial emission patterns of the different pollutants allow us to understand the role of various energy infrastructure elements. The emission patterns furthermore inform the debate on the Environmental Kuznets Curve, which applies only to pollutants which can be easily filtered and does not take into account the effects of production displacement. We also discuss the appropriateness and limitations of applying the LCA methodology in a global context, especially in developing countries. Our spatial LCI method yields important insights into the quantity and pattern of emissions due to different product life cycle stages, dependent on the local technology, emphasizing the importance of consumer behavior. From a life cycle perspective, consumer education promoting air-drying and cool washing is more important than efficient appliances. Spatial LCI with country-specific data is a promising method, necessary for the challenges of globalized production-consumption chains. We recommend inventory reporting of final energy forms, such as electricity, and modular LCA databases, which would allow the easy modification of underlying energy infrastructure.
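
The disaggregation by country, product, and pollutant can be pictured as a small multi-indexed table. The sketch below uses pandas with invented quantities purely to illustrate the structure of such an inventory; the values and row set are not the study's data.

import pandas as pd

lci = pd.DataFrame([
    # product,   stage,        country,   pollutant, kg (invented values)
    ("t-shirt", "production", "China",   "CO2", 1.2),
    ("t-shirt", "use",        "Germany", "CO2", 3.1),
    ("t-shirt", "production", "China",   "SO2", 0.020),
    ("t-shirt", "use",        "Germany", "SO2", 0.004),
    ("jacket",  "production", "China",   "CO2", 4.0),
    ("jacket",  "use",        "Germany", "CO2", 1.5),
], columns=["product", "stage", "country", "pollutant", "kg"])

# Country share of each pollutant, per product (the study's key comparison).
totals = lci.groupby(["product", "pollutant", "country"])["kg"].sum()
shares = totals / totals.groupby(level=["product", "pollutant"]).transform("sum")
print(shares.round(2))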

Relevance:

30.00%

Publisher:

Abstract:

This study is dedicated to search engine marketing (SEM). It aims to develop a business model for SEM firms and to provide explicit research into the trustworthy practices of virtual marketing companies. Optimization is a general term covering a variety of techniques and methods for promoting web pages. The research addresses optimization as a business activity and explains its role in online marketing. Additionally, it highlights the use of unethical techniques by marketers, which has created a relatively negative attitude towards them in the Internet environment. The literature review combines technical and economic findings in one place in order to highlight the technological and business attributes of SEM activities. Empirical data on search marketers was collected via e-mail questionnaires. Four representatives of SEM companies were engaged in this study to accomplish the business model design; a fifth respondent, a representative of a search engine portal, provided insight into the relations between search engines and marketers. The information obtained from the respondents was processed qualitatively. The movement of commercial organizations to the online market increases demand for promotional programs. SEM is the largest part of online marketing and is a prerogative of search engine portals. However, skilled users, or marketers, can implement long-term marketing programs by using web page optimization techniques, keyword consultancy, or content optimization to increase a web site's visibility to search engines and, therefore, users' attention to the customer's pages. SEM firms are small knowledge-intensive businesses. On the basis of the data analysis, a business model was constructed. The SEM model includes generalized constructs, although these represent a wider range of operational aspects. The building blocks of the model cover the fundamental parts of SEM commercial activity: the value creation, customer, infrastructure, and financial segments. Approaches to differentiating a company and evaluating its competitive advantages are also provided. It is argued that search marketers should make further attempts to differentiate their businesses from the large number of similar service providers. The findings indicate that SEM companies are interested in increasing their trustworthiness and building their reputation. The future of search marketing depends directly on the development of search engines.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (M.Ed.)-- Brock University, 1995.

Relevance:

30.00%

Publisher:

Abstract:

An abundant literature has demonstrated the benefits of empathy for intergroup relations (e.g., Batson, Chang, Orr, & Rowland, 2002). In addition, empathy has been identified as the mechanism by which various successful prejudice-reduction procedures impact attitudes and behaviour (e.g., Costello & Hodson, 2010). However, standard explicit techniques used in empathy-prejudice research have a number of potential limitations (e.g., resistance; McGregor, 1993). The present project explored an alternative technique, subliminally priming (i.e., outside of awareness) empathy-relevant terms (Study 1), or empathy itself (Study 2). Study 1 compared the effects of exposure to subliminal empathy-relevant primes (e.g., compassion) versus no priming and priming the opposite of empathy (e.g., indifference) on prejudice (i.e., negative attitudes), discrimination (i.e., resource allocation), and helping behaviour (i.e., willingness to empower, directly assist, or expect group change) towards immigrants. Relative to priming the opposite of empathy, participants exposed to primes of empathy-relevant constructs expressed less prejudice and were more willing to empower immigrants. In addition, the effects were not moderated by individual differences in prejudice-relevant variables (i.e., Disgust Sensitivity, Intergroup Disgust-Sensitivity, Intergroup Anxiety, Social Dominance Orientation, Right-wing Authoritarianism). Study 2 considered a different target category (i.e., Blacks) and attempted to strengthen the effects found by comparing the impact of subliminal empathy primes (relative to no prime or subliminal primes of empathy paired with Blacks) on explicit prejudice towards marginalized groups and Blacks, willingness to help marginalized groups and Blacks, as well as implicit prejudice towards Blacks. In addition, Study 2 considered potential mechanisms for the predicted effects; specifically, general empathy, affective empathy towards Blacks, cognitive empathy towards Blacks, positive mood, and negative mood. Unfortunately, using subliminal empathy primes “backfired”, such that exposure to subliminal empathy primes (relative to no prime) heightened prejudice towards marginalized groups and Blacks, and led to stronger expectations that marginalized groups and Blacks improve their own situation. However, exposure to subliminal primes pairing empathy with Blacks (relative to subliminal empathy primes alone) resulted in less prejudice towards marginalized groups and more willingness to directly assist Blacks, as expected. Interestingly, exposure to subliminal primes of empathy paired with Blacks (vs. empathy alone) resulted in more pro-White bias on the implicit prejudice measure. Study 2 did not find that the measured potential mediators explained these effects. Overall, the results of the present project do not provide strong support for the use of subliminal empathy primes for improving intergroup relations. In fact, the results of Study 2 suggest that the use of subliminal empathy primes may even backfire. The implications for intergroup research on empathy and priming procedures generally are discussed.