919 results for Optical pattern recognition Data processing


Relevance:

100.00%

Publisher:

Abstract:

Fenofibrate, widely used for the treatment of dyslipidemia, activates the nuclear receptor peroxisome proliferator-activated receptor alpha. However, liver toxicity, including liver cancer, occurs in rodents treated with fibrate drugs. Marked species differences occur in response to fibrate drugs, especially between rodents and humans, the latter of which are resistant to fibrate-induced cancer. Fenofibrate metabolism, which also shows species differences, has not been fully determined in humans and surrogate primates. In the present study, the metabolism of fenofibrate was investigated in cynomolgus monkeys by ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-QTOFMS)-based metabolomics. Urine samples were collected before and after oral doses of fenofibrate. The samples were analyzed in both positive-ion and negative-ion modes by UPLC-QTOFMS, and after data deconvolution, the resulting data matrices were subjected to multivariate data analysis. Pattern recognition was performed on the retention time, mass/charge ratio, and other metabolite-related variables. Synthesized or purchased authentic compounds were used for metabolite identification and structure elucidation by liquid chromatography-tandem mass spectrometry. Several metabolites were identified, including fenofibric acid, reduced fenofibric acid, fenofibric acid ester glucuronide, reduced fenofibric acid ester glucuronide, and compound X. Another two metabolites (compound B and compound AR), not previously reported in other species, were characterized in cynomolgus monkeys. More importantly, previously unknown metabolites, fenofibric acid taurine conjugate and reduced fenofibric acid taurine conjugate, were identified, revealing a previously unrecognized conjugation pathway for fenofibrate.
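The multivariate pattern-recognition step described above can be illustrated with a minimal PCA-via-SVD sketch. The feature matrix below (rows = urine samples, columns = ion features such as retention-time/mass-charge pairs) is random synthetic data standing in for real UPLC-QTOFMS output, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
pre_dose = rng.normal(0.0, 1.0, size=(6, 50))   # baseline urine samples
post_dose = rng.normal(0.0, 1.0, size=(6, 50))  # post-dose samples
post_dose[:, :5] += 4.0                         # a few dose-related features shift

X = np.vstack([pre_dose, post_dose])
Xc = X - X.mean(axis=0)                         # mean-center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                  # sample scores in PC space
loadings = Vt                                   # feature loadings

# Features with large PC1 loadings are candidate dose-related metabolites.
top_features = np.argsort(np.abs(loadings[0]))[::-1][:5]
```

With the pre/post group difference dominating the variance, the two groups separate along the first principal component, mirroring how pattern recognition flags metabolite features that change after dosing.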


Applying location-focused data protection law within a location-agnostic cloud computing framework is fraught with difficulties. While the proposed EU Data Protection Regulation introduces many changes to the current data protection framework, the complexities of data processing in the cloud involve multiple layers of actors and intermediaries that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.


A joint reprocessing of GPS, GLONASS and SLR observations has been carried out at TU Dresden, TU Munich, AIUB and ETH Zurich. Common a priori models have been applied to the processing of all observation types to ensure both consistent parameter estimates and the rigorous combination of microwave and optical measurements. Based on these reprocessing results, we evaluate the impact of adding GLONASS observations to standard GPS data processing. In particular, changes in station position time series and day-boundary overlaps of consecutive satellite arcs are analyzed. In addition, the GNSS orbits derived from microwave measurements are validated using independent SLR range measurements. Our SLR residuals indicate a significant improvement over previous results. Furthermore, we evaluate the performance of our high-rate (30 s) combined GNSS satellite clocks and discuss the associated zero-difference phase residuals.
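The SLR validation mentioned above boils down to an observed-minus-computed range: the measured station-to-satellite distance minus the distance implied by the microwave-derived orbit. A minimal sketch, with made-up illustrative positions (the station is only roughly placed, the satellite position is hypothetical):

```python
import numpy as np

def slr_residual(measured_range_m, sat_pos_m, station_pos_m):
    """Observed-minus-computed SLR range residual in metres."""
    computed = np.linalg.norm(np.asarray(sat_pos_m) - np.asarray(station_pos_m))
    return measured_range_m - computed

station = np.array([4075580.0, 931855.0, 4801568.0])  # ECEF coords, illustrative
satellite = np.array([18000e3, 9000e3, 16000e3])      # hypothetical GNSS position
true_range = np.linalg.norm(satellite - station)

# A 2.5 cm offset injected into the "measurement" shows up directly as the residual.
res = slr_residual(true_range + 0.025, satellite, station)
```

Small residuals of this kind, aggregated over many passes, are what indicates that the combined microwave solution produces accurate orbits.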


We consider the problem of fitting a union of subspaces to a collection of data points drawn from one or more subspaces and corrupted by noise and/or gross errors. We pose this problem as a non-convex optimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean and self-expressive dictionary plus a matrix of noise and/or gross errors. By self-expressive we mean a dictionary whose atoms can be expressed as linear combinations of themselves with low-rank coefficients. In the case of noisy data, our key contribution is to show that this non-convex matrix decomposition problem can be solved in closed form from the SVD of the noisy data matrix. The solution involves a novel polynomial thresholding operator on the singular values of the data matrix, which requires minimal shrinkage. For one subspace, a particular case of our framework leads to classical PCA, which requires no shrinkage. For multiple subspaces, the low-rank coefficients obtained by our framework can be used to construct a data affinity matrix from which the clustering of the data according to the subspaces can be obtained by spectral clustering. In the case of data corrupted by gross errors, we solve the problem using an alternating minimization approach, which combines our polynomial thresholding operator with the more traditional shrinkage-thresholding operator. Experiments on motion segmentation and face clustering show that our framework performs on par with state-of-the-art techniques at a reduced computational cost.
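The closed-form idea in the abstract above can be sketched numerically: threshold the singular values of the noisy data matrix to obtain a clean low-rank approximation, then build an affinity from the retained right singular vectors (C = V_r V_r^T, in the style of shape-interaction/low-rank-representation methods). Soft thresholding is used here as a simple stand-in for the paper's polynomial thresholding operator, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 1-D subspaces in R^5, 10 points each, plus small Gaussian noise.
b1, b2 = rng.normal(size=(5, 1)), rng.normal(size=(5, 1))
X = np.hstack([b1 @ rng.normal(size=(1, 10)), b2 @ rng.normal(size=(1, 10))])
X += 0.01 * rng.normal(size=X.shape)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
tau = 0.1
s_thr = np.maximum(s - tau, 0.0)         # stand-in shrinkage operator
r = int((s_thr > 0).sum())               # effective rank after thresholding
A = (U[:, :r] * s_thr[:r]) @ Vt[:r]      # clean low-rank approximation
C = np.abs(Vt[:r].T @ Vt[:r])            # affinity from low-rank coefficients
```

In the full method, spectral clustering of the affinity C would recover the assignment of points to subspaces; the per-singular-value rule would be the polynomial operator rather than plain soft thresholding.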


Voluntary control of information processing is crucial for allocating resources and prioritizing the processes that are most important in a given situation; the algorithms underlying such control, however, are often unclear. We investigated possible control algorithms for performance of the majority function, in which participants searched for and identified one of two alternative categories (left- or right-pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set size of 1, 3, and 5) and content (ratio of left- and right-pointing arrows within a set) of the inputs to test competing hypotheses about the mental operations underlying information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm, compared to alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.
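One plausible reading of the three candidate algorithms can be sketched as follows, counting item inspections as a proxy for computational load. The group size and stopping rule in the grouping variant are illustrative assumptions, not the paper's fitted model.

```python
import random

def exhaustive(items):
    """Inspect every item, then decide."""
    left = sum(1 for x in items if x == "L")
    return ("L" if left * 2 > len(items) else "R"), len(items)

def self_terminating(items):
    """Inspect items in order; stop as soon as one category has a majority."""
    need = len(items) // 2 + 1
    counts, ops = {"L": 0, "R": 0}, 0
    for x in items:
        counts[x] += 1
        ops += 1
        if counts[x] >= need:
            break
    return max(counts, key=counts.get), ops

def grouping(items, group_size=3, seed=0):
    """Sample (and resample) small groups until one category has a majority."""
    rng = random.Random(seed)
    pool, counts = list(items), {"L": 0, "R": 0}
    ops, need = 0, len(items) // 2 + 1
    while max(counts.values()) < need and pool:
        take = min(group_size, len(pool))
        group = [pool.pop(rng.randrange(len(pool))) for _ in range(take)]
        ops += take
        for x in group:
            counts[x] += 1
    return max(counts, key=counts.get), ops
```

For a set like L, L, R, L, R, all three reach the same decision but incur different loads, which is the kind of difference the reaction-time analysis exploits.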


Chondrocyte gene regulation is important for the generation and maintenance of cartilage tissues. Several regulatory factors have been identified that play a role in chondrogenesis, including the positive trans-acting factors of the SOX family, such as SOX9, SOX5, and SOX6, as well as negative trans-acting factors such as C/EBP and delta EF1. However, a complete understanding of the intricate regulatory network that governs the tissue-specific expression of cartilage genes is not yet available. We have taken a computational approach to identify cis-regulatory, transcription factor (TF) binding motifs in a set of cartilage-characteristic genes to better define the transcriptional regulatory networks that regulate chondrogenesis. Our computational methods have identified several TFs, whose binding profiles are available in the TRANSFAC database, as important to chondrogenesis. In addition, a cartilage-specific SOX-binding profile was constructed and used to identify both known and novel functional paired SOX-binding motifs in chondrocyte genes. Using DNA pattern-recognition algorithms, we have also identified cis-regulatory elements for unknown TFs. We have validated our computational predictions through mutational analyses in cell transfection experiments. One novel regulatory motif, N1, found at high frequency in the COL2A1 promoter, was found to bind to chondrocyte nuclear proteins. Mutational analyses suggest that this motif binds a repressive factor that regulates basal levels of the COL2A1 promoter.
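The DNA pattern-recognition step can be illustrated with a minimal consensus-motif scanner supporting IUPAC ambiguity codes. The toy promoter sequence and the AACAAT half-site used below are illustrative assumptions, not the paper's cartilage-specific SOX profile (which is a position-weight matrix, not a consensus string).

```python
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "W": "[AT]", "S": "[CG]", "N": "[ACGT]"}

def motif_to_regex(motif):
    """Expand an IUPAC consensus motif into a regular expression."""
    return "".join(IUPAC[base] for base in motif)

def scan(sequence, motif):
    """Return 0-based start positions of every motif match on the forward strand."""
    pattern = re.compile(motif_to_regex(motif))
    return [m.start() for m in pattern.finditer(sequence)]

# Toy promoter containing a forward half-site and its reverse complement,
# mimicking a paired SOX-binding arrangement.
promoter = "GGAACAATGGCCATTGTTCC"
forward_hits = scan(promoter, "AACAAT")   # forward half-site
reverse_hits = scan(promoter, "ATTGTT")   # reverse-complement half-site
```

Pairing hits on opposite strands within a fixed spacer range is one simple way to detect the paired-motif arrangements described above.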


The finite depth of field of a real camera can be used to estimate the depth structure of a scene. The distance of an object from the plane in focus determines the defocus blur size. The shape of the blur depends on the shape of the aperture. The blur shape can be designed by masking the main lens aperture. In fact, aperture shapes different from the standard circular aperture give improved accuracy of depth estimation from defocus blur. We introduce an intuitive criterion to design aperture patterns for depth from defocus. The criterion is independent of a specific depth estimation algorithm. We formulate our design criterion by imposing constraints directly in the data domain and optimize the amount of depth information carried by blurred images. Our criterion is a quadratic function of the aperture transmission values. As such, it can be numerically evaluated to estimate optimized aperture patterns quickly. The proposed mask optimization procedure is applicable to different depth estimation scenarios. We use it for depth estimation from two images with different focus settings, for depth estimation from two images with different aperture shapes as well as for depth estimation from a single coded aperture image. In this work we show masks obtained with this new evaluation criterion and test their depth discrimination capability using a state-of-the-art depth estimation algorithm.
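Because the criterion is a quadratic function of the aperture transmission values, candidate masks can be scored cheaply and compared directly. A hedged sketch of that selection loop, where the symmetric matrix Q is a random stand-in for the paper's data-domain criterion:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 9                         # 3x3 aperture grid, flattened to a vector t
M = rng.normal(size=(n, n))
Q = M + M.T                   # symmetric quadratic-form matrix (assumed given)

def score(t):
    """Evaluate the quadratic design criterion t^T Q t for a transmission vector t."""
    t = np.asarray(t, dtype=float)
    return float(t @ Q @ t)

# Enumerate all non-empty binary masks on the 3x3 grid and keep the best-scoring one.
best_mask = max((m for m in itertools.product([0, 1], repeat=n) if any(m)),
                key=score)
best_pattern = np.array(best_mask).reshape(3, 3)
```

For realistic grid sizes exhaustive enumeration is infeasible, but the cheap quadratic evaluation is what makes greedy or randomized mask search practical, as the abstract notes.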