787 results for "Gradient-based approaches"


Relevance: 100.00%

Abstract:

Physical therapy students must apply the relevant information learned in their academic and clinical experience to solve problems in treating patients. I compared the clinical cognitive competence in patient care of second-year master's students enrolled in two different curricular programs: modified problem-based (M P-B; n = 27) and subject-centered (S-C; n = 41). The main features of S-C learning are lecture and demonstration as the major teaching strategies and no exposure to patients or problem-solving learning until the sciences (knowledge) have been taught. By comparison, the main features of M P-B learning are case study in small student groups as the main teaching strategy, early and frequent exposure to patients, and knowledge and problem-solving skills learned together for each specific case. Basic and clinical orthopedic knowledge was measured with a written test with open-ended items. Problem-solving skills were measured with a written case study patient problem test yielding three subscores: assessment, problem identification, and treatment planning.

Results indicated that among the demographic and educational characteristics analyzed, there was a significant difference between groups on ethnicity, bachelor degree type, admission GPA, and current GPA, but no significant difference on gender, age, possession of a physical therapy assistant license, or GRE score. In addition, the M P-B group achieved a significantly higher adjusted mean score on the orthopedic knowledge test after controlling for GRE scores. The S-C group achieved a significantly higher adjusted mean total score and treatment planning subscore on the case study test after controlling for orthopedic knowledge test scores. These findings did not support their respective research hypotheses. There was no significant difference between groups on the assessment and problem identification subscores of the case study test. The integrated M P-B approach promoted superior retention of basic and clinical science knowledge. The results on problem-solving skills were mixed. The S-C approach facilitated superior treatment planning skills, but equivalent patient assessment and problem identification skills, by emphasizing all equally and exposing the students to more patients with a wider variety of orthopedic physical therapy needs than in the M P-B approach.

Relevance: 100.00%

Abstract:

The ability of a nation's citizens to determine their own representation has long been regarded as one of the most critical objectives of any electoral system. Without the assurance of equality in representation, the fundamental nature and operation of the political system is severely undermined. Given the centuries of institutional reforms and population changes in the American system, Congressional Redistricting stands as an institution whereby this promise of effective representation can either be fulfilled or denied. The broad set of processes that encapsulate Congressional Redistricting have been discussed, experimented with, and modified to achieve clear objectives and have long been understood to be important. Questions remain about how the dynamics linking all of these processes operate and what impact the realities of Congressional Redistricting hold for representation in the American system. This dissertation examines three aspects of how Congressional Redistricting in the United States operates in accordance with the principle of "One Person, One Vote." Utilizing data and data analysis techniques from Geographic Information Systems (GIS), it addresses how Congressional Redistricting affects this principle from the standpoint of legislator accountability, redistricting institutions, and the promise of effective minority representation.

Relevance: 100.00%

Abstract:

Progress in cognitive neuroscience relies on methodological developments to increase the specificity of knowledge obtained regarding brain function. For example, in functional neuroimaging the current trend is to study the type of information carried by brain regions rather than simply compare activation levels induced by task manipulations. In this context, noninvasive transcranial brain stimulation (NTBS) in the study of cognitive functions may appear coarse and old-fashioned in its conventional uses. However, through their multitude of parameters, and by coupling them with behavioral manipulations, NTBS protocols can reach the specificity of imaging techniques. Here we review the different paradigms that have aimed to accomplish this in both basic science and clinical settings, following the general philosophy of information-based approaches.

Relevance: 100.00%

Abstract:

Real-time data on key performance enablers in logistics warehouses are of growing importance, as they permit decision-makers to react instantaneously to alerts, deviations, and damages. Several technologies appear to be adequate data sources for collecting the required information. In the present research paper, the load status of a forklift's fork is to be recognized with the help of a sensor-based and a camera-based solution approach. A comparison of initial experimental results indicates which direction is the more promising for further research.

Relevance: 100.00%

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to detection using different convolutions: three types of filter (median, average, and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24, and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and thereby improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length comprises a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through preliminary smoothing of the data considerably improves frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard 3 × 3 median convolution) in terms of detectability, length, and location. This method, using 4 km data, is easily applicable to large regions or at the global scale with far fewer constraints on data manipulation and processing time relative to 1 km data.
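As a minimal illustration of the preliminary smoothing step described above, the sketch below applies the three filter types and kernel sizes compared in the study to a synthetic SST field using SciPy. The synthetic field, the function name, and the sigma chosen for the Gaussian case are assumptions for illustration, not the study's actual processing chain.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

def smooth_sst(sst, filter_type="median", kernel=5):
    """Preliminary smoothing of an SST field before SIED/CMW detection.

    filter_type: 'median', 'average', or 'gaussian' (the three filters
    compared in the study); kernel: 3, 5, 7, or 9 (pixels).
    """
    if filter_type == "median":
        return median_filter(sst, size=kernel)
    if filter_type == "average":
        return uniform_filter(sst, size=kernel)
    if filter_type == "gaussian":
        # Mapping kernel width to a sigma is an assumption of this sketch.
        return gaussian_filter(sst, sigma=kernel / 4.0)
    raise ValueError(filter_type)

# Example: the combination found optimal for strong fronts at 4 km,
# a median filter with a 5 x 5 pixel kernel (7 x 7 for weak fronts).
sst = np.random.default_rng(1).normal(20.0, 0.5, size=(256, 256))
smoothed = smooth_sst(sst, "median", kernel=5)
```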

Relevance: 100.00%

Abstract:

"Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.

Relevance: 100.00%

Abstract:

The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. This paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. Using the first of these methods, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness or because a local objective function minimum has been encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This provides a useful means of inquiring into the well-posedness of a parameter estimation problem and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model-run efficiency for the new method.
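To make the second enhancement concrete, here is a hedged sketch of a maximin restart rule: pick the next GML starting point to be maximally removed from all parameter points visited by earlier runs. The candidate-sampling scheme and the function name are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def max_distance_restart(trajectories, lower, upper,
                         n_candidates=10000, rng=None):
    """Choose a new starting point maximally removed from all parameter
    points visited by previous runs (maximin criterion).

    trajectories: (m, d) array of previously visited parameter vectors.
    lower, upper: (d,) bounds of the feasible parameter space.
    """
    rng = rng or np.random.default_rng()
    d = len(lower)
    candidates = rng.uniform(lower, upper, size=(n_candidates, d))
    # Distance from each candidate to its nearest previously visited point.
    dists = np.linalg.norm(
        candidates[:, None, :] - trajectories[None, :, :], axis=2
    ).min(axis=1)
    return candidates[np.argmax(dists)]

# Usage: restart far away from two earlier parameter trajectories.
prev = np.array([[0.20, 1.0], [0.25, 1.1], [0.80, 0.4]])
start = max_distance_restart(prev, lower=np.array([0.0, 0.0]),
                             upper=np.array([1.0, 2.0]))
```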

Relevance: 100.00%

Abstract:

The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called "omics" disciplines of the biological sciences. Such variability is uncovered by implementation of multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n ≪ p constraint and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This problem might be solved using a statistical model-based approach, where not only is the importance of each individual protein made explicit, but the proteins are also combined into a readily interpretable classification rule without relying on a black-box approach. Here we incorporate the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
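The n ≪ p workflow described above is easy to sketch with scikit-learn: reduce dimension first, then classify, and compare against an SVM on the raw features. The simulated data, component counts, and the 0.5 decision threshold for PLS-DA are assumptions for illustration only, not the study's analysis.

```python
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Simulated "omics"-like data: far more variables than observations (n << p).
X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)

# Dimension reduction followed by classification, vs. SVM on raw features.
pca_svm = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
raw_svm = make_pipeline(StandardScaler(), SVC())
for name, model in [("PCA+SVM", pca_svm), ("SVM", raw_svm)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())

# PLS-DA: regress the class label on X and threshold the prediction.
pls = PLSRegression(n_components=2).fit(X[:40], y[:40])
pred = (pls.predict(X[40:]).ravel() > 0.5).astype(int)
print("PLS-DA accuracy:", (pred == y[40:]).mean())
```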

Relevance: 100.00%

Abstract:

Theoretical approaches are of fundamental importance in predicting the potential impact of waste disposal facilities on groundwater contamination. Appropriate design parameters are generally estimated by fitting theoretical models to data gathered from field monitoring or laboratory experiments. Transient through-diffusion tests are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These parameters are usually estimated either by approximate eye-fitting calibration or by combining the solution of the direct problem with any available gradient-based technique. In this work, an automated, gradient-free solver is developed to estimate the mass transport parameters of a transient through-diffusion model. The proposed inverse model uses a particle swarm optimization (PSO) algorithm based on the social behavior of animals searching for food sources. The finite difference numerical solution of the forward model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated, and mass transport parameters are estimated from laboratory through-diffusion experimental data. An inverse model based on a standard gradient-based technique is formulated for comparison, and a detailed comparative study is carried out between the conventional methods and the proposed solver. The new automated technique is found to be very efficient and robust, and the mass transport parameters are obtained with good precision.
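A minimal gradient-free PSO solver of the kind the abstract describes can be sketched in a few lines; below it is coupled to a toy forward model standing in for the finite-difference through-diffusion solution. The swarm constants, parameter bounds, and the toy model are illustrative assumptions.

```python
import numpy as np

def pso(objective, lower, upper, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, rng=None):
    """Minimal particle swarm optimizer (gradient-free minimization)."""
    rng = rng or np.random.default_rng(0)
    d = len(lower)
    x = rng.uniform(lower, upper, (n_particles, d))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Toy stand-in for the forward model (the real one is a finite-difference
# solution of the transient through-diffusion equation).
t = np.linspace(0.1, 5, 40)
def forward(theta):
    return theta[0] * (1 - np.exp(-theta[1] * t))

true = np.array([2.0, 0.8])
data = forward(true) + np.random.default_rng(1).normal(0, 0.01, t.size)

theta = pso(lambda p: np.sum((forward(p) - data) ** 2),
            lower=np.array([0.0, 0.0]), upper=np.array([5.0, 5.0]))
print(theta)  # should be close to [2.0, 0.8]
```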

Relevance: 100.00%

Abstract:

This paper advocates strategies, processes, and practices that enable livelihoods approaches rather than resource-based approaches; ‘direct’ institutional and policy development rather than ‘project demonstrations’; and support for regional, national, and local communications. (PDF contains 12 pages.)

Relevance: 100.00%

Abstract:

The role of glucagon-like peptide (GLP)-1-based treatment approaches for type 2 diabetes mellitus (T2DM) is increasing. Although self-monitoring of blood glucose (SMBG) has been performed in numerous studies on GLP-1 analogs and dipeptidyl peptidase-4 inhibitors, the potential role of SMBG in GLP-1-based treatment strategies has not been elaborated. This expert recommendation suggests individualized SMBG strategies in GLP-1-based treatment approaches and proposes simple, clinically applicable SMBG schemes. Potential benefits of SMBG in GLP-1-based treatment approaches are early assessment of treatment success or failure, timely modification of treatment, detection of hypoglycemic episodes, assessment of glucose excursions, and support of diabetes management and diabetes education. The length and frequency of SMBG should depend on the clinical setting and the quality of metabolic control. SMBG is considered to play an important role in the optimization of diabetes management in T2DM patients treated with GLP-1-based approaches.

Relevance: 100.00%

Abstract:

Independent component analysis (ICA) and seed-based approaches (SBA) applied to functional magnetic resonance imaging blood oxygenation level dependent (BOLD) data have become widely used tools for identifying functionally connected, large-scale brain networks. Differences between task conditions, as well as specific alterations of the networks in patients compared to healthy controls, have been reported. However, BOLD does not permit quantification of absolute network metabolic activity, which is of particular interest in the case of pathological alterations. In contrast, arterial spin labeling (ASL) techniques allow quantification of absolute cerebral blood flow (CBF) at rest and in task-related conditions. In this study, we explored the feasibility of identifying networks in ASL data using ICA and of quantifying network activity in terms of absolute CBF values. Moreover, we compared the results to SBA and performed a test-retest analysis. Twelve healthy young subjects performed a finger-tapping block-design experiment, during which pseudo-continuous ASL was measured. After CBF quantification, the individual datasets were concatenated and subjected to the ICA algorithm. ICA proved capable of identifying the somato-motor and the default mode networks. Moreover, absolute network CBF within the separate networks during either condition could be quantified. We demonstrated that functional connectivity analysis using ICA and SBA is feasible and robust in ASL-CBF data. CBF functional connectivity is a novel approach that opens a new strategy for evaluating differences in network activity in terms of absolute network CBF, and it thus allows quantification of inter-individual differences in the resting state and in task-related activations and deactivations.
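A shape-level sketch of the temporal-concatenation group ICA described above, using scikit-learn's FastICA: per-subject CBF data (time × voxels) are stacked in time, decomposed, and a network's absolute CBF is read off as the mean CBF within a thresholded component map. The random data, component count, and threshold are assumptions; real CBF data and proper masking are needed for meaningful networks.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy stand-in: per-subject quantified CBF series (time x voxels).
# In the study, the datasets of all twelve subjects were concatenated
# in time before running ICA (temporal-concatenation group ICA).
rng = np.random.default_rng(0)
subjects = [rng.normal(50.0, 5.0, size=(60, 2000)) for _ in range(12)]
group = np.concatenate(subjects, axis=0)          # (12*60, 2000)

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
timecourses = ica.fit_transform(group)            # temporal modes
spatial_maps = ica.components_                    # (20, 2000) network maps

# Mean CBF of voxels inside a thresholded component map gives an absolute
# network CBF estimate (the quantity of interest in the abstract).
mask = np.abs(spatial_maps[0]) > 2 * spatial_maps[0].std()
print("network CBF:", group[:, mask].mean())
```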

Relevance: 90.00%

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use user profiles effectively is one of the most challenging tasks in developing an IF system. With the document selection criteria better defined on the basis of users' needs, filtering large streams of information can be more efficient and effective. To learn user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to information overload. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering, and they can lead to a mismatch problem.

This thesis uses rough set-based (term-based) reasoning and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: topic filtering and pattern mining. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles; a novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims to solve the information mismatch problem and is precision-oriented: a new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores. Because a relatively small number of documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly.

The new two-stage information filtering model has been evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
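As a toy illustration of the two-stage idea (not the thesis's rough-set thresholding or PTM ranking function), the sketch below filters a document stream with a cheap term-based score, then re-ranks the survivors with pattern scores. All profile terms, patterns, weights, and the threshold are invented for the example.

```python
from collections import Counter

# Hypothetical profile learned from user feedback: weighted terms for the
# cheap first stage, weighted word-pair "patterns" for the second stage.
profile_terms = {"stream": 2.0, "filtering": 3.0, "profile": 1.5}
profile_patterns = {("information", "filtering"): 4.0,
                    ("user", "profile"): 3.0}

def term_score(doc):
    counts = Counter(doc)
    return sum(w * counts[t] for t, w in profile_terms.items())

def pattern_score(doc):
    pairs = set(zip(doc, doc[1:]))        # adjacent word pairs as "patterns"
    return sum(w for p, w in profile_patterns.items() if p in pairs)

def two_stage_filter(docs, threshold=2.0):
    # Stage 1 (topic filtering): drop likely-irrelevant documents cheaply.
    survivors = [d for d in docs if term_score(d) >= threshold]
    # Stage 2 (pattern mining): precision-oriented re-ranking of survivors.
    return sorted(survivors, key=pattern_score, reverse=True)

docs = [["information", "filtering", "of", "a", "document", "stream"],
        ["cooking", "recipes", "and", "travel"],
        ["user", "profile", "learning", "for", "filtering"]]
print(two_stage_filter(docs))
```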