830 results for "Gradient-based approaches"


Relevance:

100.00%

Publisher:

Abstract:

The ability of the citizens of a nation to determine their own representation has long been regarded as one of the most critical objectives of any electoral system. Without the assurance of equality in representation, the fundamental nature and operation of the political system is severely undermined. Given the centuries of institutional reforms and population changes in the American system, Congressional Redistricting stands as an institution whereby this promise of effective representation can either be fulfilled or denied. The broad set of processes that encapsulate Congressional Redistricting has been discussed, experimented with, and modified to achieve clear objectives, and has long been understood to be important. Questions remain about how the dynamics that link all of these processes operate and what impact the realities of Congressional Redistricting hold for representation in the American system. This dissertation examines three aspects of how Congressional Redistricting in the United States operates in accordance with the principle of "One Person, One Vote." Using the data and analysis techniques of Geographic Information Systems (GIS), it addresses how Congressional Redistricting affects the principle of one person, one vote from the standpoint of legislator accountability, redistricting institutions, and the promise of effective minority representation.

Relevance:

100.00%

Publisher:

Abstract:

Progress in cognitive neuroscience relies on methodological developments that increase the specificity of the knowledge obtained about brain function. For example, in functional neuroimaging the current trend is to study the type of information carried by brain regions rather than simply to compare activation levels induced by task manipulations. In this context, noninvasive transcranial brain stimulation (NTBS) may appear coarse and old-fashioned in its conventional uses for the study of cognitive functions. However, through their multitude of parameters, and by coupling them with behavioral manipulations, NTBS protocols can reach the specificity of imaging techniques. Here we review the different paradigms that have aimed to accomplish this in both basic science and clinical settings, and follow the general philosophy of information-based approaches.

Relevance:

100.00%

Publisher:

Abstract:

Real-time data on key performance enablers in logistics warehouses are of growing importance, as they permit decision-makers to react instantaneously to alerts, deviations, and damages. Several technologies appear adequate as data sources for collecting the required information. In the present research paper, the load status of a forklift's fork is recognized with the help of a sensor-based and a camera-based solution approach. The comparison of initial experimental results indicates which direction is the more promising for further research.

Relevance:

100.00%

Publisher:

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average, and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24, and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and thus on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from largest to smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm are then applied to 4 km sea surface temperature data to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through a preliminary smoothing of the data considerably improves frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard 3 × 3 median convolution) in terms of detectability, length, and location. The method, using 4 km data, is easily applicable to large regions or at the global scale with far fewer constraints on data manipulation and processing time relative to 1 km data.
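
As a rough illustration of the preliminary-smoothing step compared above, the following sketch applies the three filter types at several kernel sizes to a synthetic temperature field and scores the resulting thermal gradients. It assumes NumPy and SciPy; the field, the gradient-magnitude proxy for front strength, and the Gaussian sigma are illustrative stand-ins, not the SIED/CMW implementation.

```python
# A minimal sketch of the preliminary-smoothing comparison, not the authors'
# SIED/CMW code. The SST field and the front-strength proxy are stand-ins.
import numpy as np
from scipy import ndimage

def smooth(sst, filt="median", kernel=5):
    """Apply one of the three tested convolutions to a 2D SST field."""
    if filt == "median":
        return ndimage.median_filter(sst, size=kernel)
    if filt == "average":
        return ndimage.uniform_filter(sst, size=kernel)
    if filt == "gaussian":
        # sigma chosen so the Gaussian roughly matches the kernel footprint
        return ndimage.gaussian_filter(sst, sigma=kernel / 4.0)
    raise ValueError(filt)

def gradient_magnitude(sst):
    """Thermal gradient magnitude, used here to separate weak from strong fronts."""
    gy, gx = np.gradient(sst)
    return np.hypot(gx, gy)

# Example: compare the smoothing combinations on a synthetic 4 km field.
rng = np.random.default_rng(0)
sst = np.cumsum(rng.normal(size=(256, 256)), axis=1)  # noisy field with zonal structure
for filt in ("median", "average", "gaussian"):
    for kernel in (3, 5, 7, 9):
        g = gradient_magnitude(smooth(sst, filt, kernel))
        print(f"{filt:8s} {kernel}x{kernel}: mean |grad| = {g.mean():.3f}")
```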

Relevance:

100.00%

Publisher:

Abstract:

Recent research trends in computer-aided drug design have shown an increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from the awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift by exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; and (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19-related drugs. The main highlight emerging from this thesis is that network-based approaches are useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same 'systems-based' approaches can be applied to other diseases. To conclude, network-based tools are proving suitable for many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, while also integrating different multi-domain information.
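
As a small illustration of the link-prediction step mentioned above, the sketch below runs a random walk with restart on a toy heterogeneous drug-target network. It assumes only NumPy; the adjacency matrix, the restart probability of 0.7, and the node layout are illustrative assumptions, not the thesis implementation.

```python
# A minimal random-walk-with-restart (RWR) sketch for ranking candidate
# targets; the tiny network and parameters are illustrative only.
import numpy as np

def rwr(adj, seed_idx, restart=0.7, tol=1e-8, max_iter=1000):
    """RWR on a network whose nodes mix drugs and targets.

    adj      : symmetric adjacency matrix
    seed_idx : indices of the query nodes (e.g. a drug of interest)
    restart  : probability of jumping back to the seed at each step
    """
    # Column-normalize the adjacency matrix into a transition matrix
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    p0 = np.zeros(adj.shape[0])
    p0[list(seed_idx)] = 1.0 / len(seed_idx)
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Toy heterogeneous network: nodes 0-2 are drugs, nodes 3-5 are targets.
adj = np.array([[0, 1, 0, 1, 0, 0],
                [1, 0, 1, 0, 1, 0],
                [0, 1, 0, 0, 0, 1],
                [1, 0, 0, 0, 1, 0],
                [0, 1, 0, 1, 0, 1],
                [0, 0, 1, 0, 1, 0]], dtype=float)
scores = rwr(adj, seed_idx=[0])  # rank targets for drug 0
print(scores[3:])                # affinity scores for the three targets
```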

Relevance:

100.00%

Publisher:

Abstract:

"Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.

Relevance:

100.00%

Publisher:

Abstract:

The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimal at all. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. This paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. Using the first of these methods, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This provides a useful means of inquiring into the well-posedness of a parameter estimation problem and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model-run efficiency for the new method.
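
The following sketch (assuming NumPy) shows a basic Levenberg-Marquardt loop together with a naive version of the second enhancement: restarting from a point maximally removed from previous parameter trajectories. The toy test problem, damping schedule, and restart rule are illustrative assumptions, not the paper's implementation.

```python
# A minimal Levenberg-Marquardt sketch with a naive "maximally removed"
# restart heuristic; an illustrative toy, not the GML/watershed code.
import numpy as np

def lm_fit(residual, jacobian, theta0, lam=1e-2, n_iter=50):
    """Basic Levenberg-Marquardt minimization of sum(residual(theta)**2)."""
    theta = np.asarray(theta0, dtype=float)
    trajectory = [theta.copy()]
    for _ in range(n_iter):
        r, J = residual(theta), jacobian(theta)
        # Damped normal equations: (J^T J + lam I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), -J.T @ r)
        theta_new = theta + step
        if np.sum(residual(theta_new) ** 2) < np.sum(r ** 2):
            theta, lam = theta_new, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 2.0                          # reject step, increase damping
        trajectory.append(theta.copy())
    return theta, np.array(trajectory)

def farthest_start(trajectories, bounds, n_candidates=1000):
    """Pick a start point maximally removed from all previous trajectories."""
    rng = np.random.default_rng()
    lo, hi = bounds
    cands = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    past = np.vstack(trajectories)
    dists = np.min(np.linalg.norm(cands[:, None, :] - past[None, :, :], axis=2), axis=1)
    return cands[np.argmax(dists)]

# Toy multimodal problem: a periodic model with many local optima.
t = np.linspace(0, 1, 50)
data = np.sin(6 * t) + 0.3 * t
residual = lambda th: np.sin(th[0] * t) + th[1] * t - data
jacobian = lambda th: np.column_stack([t * np.cos(th[0] * t), t])

trajs, best = [], None
theta0 = np.array([1.0, 0.0])
for run in range(3):
    theta, traj = lm_fit(residual, jacobian, theta0)
    trajs.append(traj)
    phi = np.sum(residual(theta) ** 2)
    if best is None or phi < best[1]:
        best = (theta, phi)
    theta0 = farthest_start(trajs, bounds=(np.array([0.0, -1.0]), np.array([12.0, 1.0])))
print("best parameters:", best[0], "objective:", best[1])
```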

Relevance:

100.00%

Publisher:

Abstract:

The role of glucagon-like peptide (GLP)-1-based treatment approaches for type 2 diabetes mellitus (T2DM) is increasing. Although self-monitoring of blood glucose (SMBG) has been performed in numerous studies on GLP-1 analogs and dipeptidyl peptidase-4 inhibitors, the potential role of SMBG in GLP-1-based treatment strategies has not been elaborated. This expert recommendation suggests individualized SMBG strategies for GLP-1-based treatment approaches and proposes simple, clinically applicable SMBG schemes. Potential benefits of SMBG in GLP-1-based treatment approaches are early assessment of treatment success or failure, timely modification of treatment, detection of hypoglycemic episodes, assessment of glucose excursions, and support of diabetes management and diabetes education. The length and frequency of SMBG should depend on the clinical setting and the quality of metabolic control. SMBG is considered to play an important role in the optimization of diabetes management in T2DM patients treated with GLP-1-based approaches.

Relevance:

100.00%

Publisher:

Abstract:

Independent component analysis (ICA) and seed-based approaches (SBA) applied to functional magnetic resonance imaging blood oxygenation level dependent (BOLD) data have become widely used tools for identifying functionally connected, large-scale brain networks. Differences between task conditions, as well as specific alterations of the networks in patients compared with healthy controls, have been reported. However, BOLD lacks the possibility of quantifying absolute network metabolic activity, which is of particular interest in the case of pathological alterations. In contrast, arterial spin labeling (ASL) techniques allow absolute cerebral blood flow (CBF) to be quantified at rest and in task-related conditions. In this study, we explored the ability to identify networks in ASL data using ICA and to quantify network activity in terms of absolute CBF values. Moreover, we compared the results to SBA and performed a test-retest analysis. Twelve healthy young subjects performed a finger-tapping block-design experiment. During the task, pseudo-continuous ASL was measured. After CBF quantification, the individual datasets were concatenated and subjected to the ICA algorithm. ICA proved capable of identifying the somato-motor and the default mode networks. Moreover, absolute network CBF within the separate networks during either condition could be quantified. We demonstrate that functional connectivity analysis using ICA and SBA is feasible and robust in ASL-CBF data. CBF functional connectivity is a novel approach that opens a new strategy for evaluating differences in network activity in terms of absolute network CBF, and thus allows inter-individual differences in the resting state and in task-related activations and deactivations to be quantified.
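
As a schematic of the analysis pipeline described above, the sketch below concatenates quantified CBF data across subjects and applies ICA, then reports mean CBF within one thresholded component map in absolute units. It assumes scikit-learn's FastICA; the data shapes, component count, and thresholding rule are illustrative placeholders, not the authors' pipeline.

```python
# Group ICA on concatenated, quantified CBF maps; illustrative sketch only.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, n_volumes, n_voxels = 12, 100, 5000

# Stand-in for quantified CBF time series (ml/100g/min), one array per subject
cbf = [rng.normal(60, 10, size=(n_volumes, n_voxels)) for _ in range(n_subjects)]

# Temporal concatenation across subjects, as described in the abstract
X = np.vstack(cbf)                       # (subjects * volumes, voxels)

ica = FastICA(n_components=20, random_state=0, max_iter=500)
timecourses = ica.fit_transform(X)       # component time courses
spatial_maps = ica.components_           # (components, voxels): candidate networks

# Because ASL is quantitative, mean CBF within a thresholded component map
# can be reported in absolute units, unlike BOLD
network = np.abs(spatial_maps[0]) > 2 * np.abs(spatial_maps[0]).std()
print("mean network CBF:", X[:, network].mean(), "ml/100g/min")
```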

Relevance:

90.00%

Publisher:

Abstract:

In this work, we present a neural network (NN) based method designed for 3D rigid-body registration of FMRI time series, which relies on a limited number of Fourier coefficients of the images to be aligned. These coefficients, comprised in a small cubic neighborhood located at the first octant of the 3D Fourier space (including the DC component), are fed into six NNs during the learning stage, each yielding the estimate of one registration parameter. The proposed method was assessed for 3D rigid-body transformations using DC neighborhoods of different sizes. The mean absolute registration errors are approximately 0.030 mm in translations and 0.030 deg in rotations for the typical motion amplitudes encountered in FMRI studies. The construction of the training set and the learning stage are fast, requiring, respectively, 90 s and 1 to 12 s, depending on the number of input and hidden units of the NN. We believe that NN-based approaches to the problem of FMRI registration can be of great interest in the future. For instance, NNs relying on limited k-space data (possibly from navigator echoes) could be a valid solution to the problem of prospective (in-frame) FMRI registration.
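
The following sketch illustrates the core idea under stated assumptions: a small cube of low-frequency 3D Fourier coefficients (first octant, including DC) serves as the input to a per-parameter regressor. It uses NumPy and a scikit-learn MLPRegressor as a stand-in for one of the six networks, and restricts the toy to integer translations along one axis; all sizes are illustrative, not those of the paper.

```python
# Low-frequency Fourier coefficients as features for a per-parameter
# registration regressor; a toy stand-in for the method described above.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fourier_features(volume, k=3):
    """k x k x k neighborhood at the first octant of the 3D DFT (incl. DC)."""
    F = np.fft.fftn(volume)[:k, :k, :k]
    return np.concatenate([F.real.ravel(), F.imag.ravel()])

rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16, 16))

# Training set: volumes shifted along one axis by a known integer translation
# (a toy stand-in for full rigid-body transformations)
shifts = rng.integers(-2, 3, size=200)
X = np.array([fourier_features(np.roll(base, int(s), axis=0)) for s in shifts])

# One network per registration parameter; here only the x-translation
net_tx = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net_tx.fit(X, shifts.astype(float))

test = fourier_features(np.roll(base, 1, axis=0))
print("estimated x-translation:", net_tx.predict([test])[0])  # ~1 voxel
```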

Relevance:

90.00%

Publisher:

Abstract:

When unmanned underwater vehicles (UUVs) perform missions near the ocean floor, optical sensors can be used to improve local navigation. Video mosaics allow the images acquired by the vehicle to be processed efficiently and also provide position estimates. In this paper we discuss the role of lens distortions in this context, proving that degenerate mosaics have their origin not only in the selected motion model or in registration errors, but also in the cumulative effect of radial distortion residuals. Additionally, we present results on the accuracy of different feature-based approaches for the self-correction of lens distortions that may guide the choice of appropriate techniques for correcting distortions.
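
For readers unfamiliar with the radial distortion model at issue, here is a minimal sketch (assuming NumPy) of distorting points with the standard polynomial model and estimating the first coefficient from the straightness of imaged lines, a common self-correction cue. The coefficients, the first-order inverse, and the grid search are illustrative assumptions, not the paper's feature-based method.

```python
# Radial distortion model r_d = r(1 + k1 r^2 + k2 r^4) and a simple
# straightness-based self-correction; illustrative sketch only.
import numpy as np

def apply_radial_distortion(pts, center, k1, k2=0.0):
    """Map undistorted normalized points to distorted ones."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d * (1 + k1 * r2 + k2 * r2 ** 2)

def residual_straightness(k1, lines, center):
    """Self-correction cue: points on a straight 3D line should stay collinear.

    Each line is an (N, 2) array of distorted points; undistort with the
    candidate k1 and measure deviation from the best-fit line via SVD.
    """
    err = 0.0
    for pts in lines:
        d = pts - center
        r2 = np.sum(d ** 2, axis=1, keepdims=True)
        und = center + d / (1 + k1 * r2)  # first-order inverse of the model
        c = und - und.mean(axis=0)
        err += np.linalg.svd(c, compute_uv=False)[-1]  # smallest singular value
    return err

# Toy usage: recover k1 from synthetically distorted straight lines
center = np.array([0.0, 0.0])
true_k1 = -0.05
lines = [apply_radial_distortion(
            np.column_stack([np.linspace(-1, 1, 20), np.full(20, y)]),
            center, true_k1)
         for y in (-0.5, 0.2, 0.8)]
grid = np.linspace(-0.15, 0.0, 61)
best = min(grid, key=lambda k: residual_straightness(k, lines, center))
print("estimated k1:", best)   # approximately recovers true_k1
```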

Relevance:

90.00%

Publisher:

Abstract:

The amount of sequence data available today greatly facilitates access to genes from many gene families. Primers amplifying the desired genes over a range of species are readily obtained by aligning conserved gene regions, and laborious gene isolation procedures can often be replaced by quicker PCR-based approaches. However, in the case of multigene families, PCR-based approaches bear the often ignored risk of incomplete isolation of family members. This problem is most prominent in gene families with highly variable, and thus unpredictable, numbers of gene copies among species, such as the major histocompatibility complex (MHC). In this study, we (i) report new primers for the isolation of the MHC class IIB (MHCIIB) gene family in birds and (ii) share our experience with isolating MHCIIB genes from an unprecedented number of avian species from all over the avian phylogeny. We report important and usually underappreciated problems encountered during PCR-based multigene family isolation and provide a collection of measures that significantly improve the chances of successfully isolating complete multigene families using PCR-based approaches.

Relevance:

90.00%

Publisher:

Abstract:

Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study.

Location: World-wide.

Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years.

Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were: (1) the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information; and (2) the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimation influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node.

Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to the dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.

Relevance:

90.00%

Publisher:

Abstract:

Machine learning has been widely applied to data analysis in various domains, but it is still new to personalized medicine, especially dose individualization. In this paper, we focus on the prediction of drug concentrations using Support Vector Machines (SVM) and on the analysis of the influence of each feature on the prediction results. Our study shows that SVM-based approaches achieve prediction results similar to those of a pharmacokinetic model. The two proposed example-based SVM methods demonstrate that the individual features help to increase the accuracy of drug concentration predictions with a reduced library of training data.
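
A minimal sketch of what an SVM-based concentration predictor could look like, assuming scikit-learn's SVR; the features (dose, weight, age, time since dose) and the synthetic pharmacokinetic-like data are illustrative placeholders, not the study's dataset or its example-based methods.

```python
# SVM regression for drug concentration prediction; illustrative sketch only.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(100, 600, n),   # dose (mg)
    rng.uniform(40, 100, n),    # body weight (kg)
    rng.uniform(18, 80, n),     # age (years)
    rng.uniform(1, 24, n),      # time since dose (h)
])
# Synthetic one-compartment-like concentration with noise (illustrative only)
y = X[:, 0] / X[:, 1] * np.exp(-0.1 * X[:, 3]) + rng.normal(0, 0.2, n)

# Scale features, then fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
print("mean absolute error:", np.abs(pred - y[150:]).mean(), "mg/L")
```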