104 results for "Normalization constraint"
Abstract:
Marked changes in dietary protein content affect the rat's growth pattern, but there are no data on the effects of moderate changes. Here we used a genetically obese rat strain (Zucker) to examine the metabolic modifications induced by moderate changes in dietary protein content, doubling (high-protein (HP): 30%) or halving (low-protein (LP): 8%) the protein content of a reference diet (RD: 16%). Nitrogen and energy balances and amino acid levels were determined in lean (L) and obese (O) animals after 30 days on each diet. Lean HP (LHP) animals showed higher energy efficiency and amino acid catabolism but maintained amino acid accrual rates similar to those of the lean RD (LRD) group. Conversely, the lean LP (LLP) group showed a lower growth rate, which was compensated by a relative increase in fat mass; furthermore, these animals accrued amino acids more efficiently. Obesity increased amino acid catabolism as a result of massive amino acid intake; however, obese rats maintained protein accretion rates, which, in the OHP group, implied a normalization of energy efficiency. Nonetheless, the obese LP (OLP) group showed the same protein accretion pattern as the lean animals (LLP). On the basis of our data, we conclude that Zucker rats accommodate their metabolism to moderate increases in dietary protein content but do not adjust in the same way to a 50% decrease, as shown by a reduced growth index in both lean and obese rats.
Abstract:
Tissue protein hypercatabolism (TPH) is a most important feature in cancer cachexia, particularly with regard to skeletal muscle. The rat ascites hepatoma Yoshida AH-130 is a very suitable model system for studying the mechanisms involved in the processes that lead to tissue depletion, since it induces in the host a rapid and progressive muscle waste mainly due to TPH (Tessitore, L., G. Bonelli, and F. M. Baccino. 1987. Biochem. J. 241:153-159). Detectable plasma levels of tumor necrosis factor-alpha associated with marked perturbations in hormonal homeostasis have been shown to concur in forcing metabolism into a catabolic setting (Tessitore, L., P. Costelli, and F. M. Baccino. 1993. Br. J. Cancer. 67:15-23). The present study investigated whether beta 2-adrenergic agonists, which are known to favor skeletal muscle hypertrophy, could effectively antagonize the enhanced muscle protein breakdown in this cancer cachexia model. One such agent, clenbuterol, indeed largely prevented skeletal muscle waste in AH-130-bearing rats by restoring protein degradation rates close to control values. This normalization of protein breakdown rates was achieved through a decrease in the hyperactivation of the ATP-ubiquitin-dependent proteolytic pathway, as previously demonstrated in our laboratory (Llovera, M., C. García-Martínez, N. Agell, M. Marzábal, F. J. López-Soriano, and J. M. Argilés. 1994. FEBS (Fed. Eur. Biochem. Soc.) Lett. 338:311-318). By contrast, the drug did not exert any measurable effect on various parenchymal organs, nor did it modify the plasma levels of corticosterone and insulin, which were increased and decreased, respectively, in the tumor hosts. The present data give new insights into the mechanisms by which clenbuterol exerts its preventive effect on muscle protein waste and seem to warrant experimental protocols involving the use of clenbuterol or similar drugs in the treatment of pathological states involving TPH, particularly in skeletal muscle and heart, as in the present model of cancer cachexia.
Abstract:
This paper reviews the concept of presence in immersive virtual environments: the sense of being there, signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement or involvement in the portrayed environment. We argue that there are three necessary conditions for presence: (a) a consistent, low-latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes, with the level of immersion acting as a constraint on this plausibility; and (c) behaviour-response correlations: presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth, giving some examples.
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ this method to compute definition scripts for the JPEG standard's progressive operation mode using a quantization-based approach, so that a trial-and-error procedure is no longer necessary to obtain a desired PSNR and/or definition script, reducing cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard, so that an image may be compressed under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard's progressive operation mode from a quantization-based approach and find a relationship between the measured image quality at a given stage of the coding process and a quantization matrix; the definition-script construction problem can thus be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix, and the PSNR estimation error is usually smaller than 1 dB, decreasing further for high PSNR values. Definition scripts can be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image quality improvement during decoding.
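For orientation, the idea of choosing quantization steps to meet a target PSNR can be sketched as follows, using a Laplacian model of the DCT coefficients and a single scale factor applied to a base quantization matrix. The names base_q (a base 8x8 matrix) and laplacian_scales (per-coefficient Laplacian scale parameters estimated from the image) are placeholders, and the Monte Carlo distortion estimate and the bisection on the scale factor are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch (not the paper's algorithm): pick a single scale factor for a
# base quantization matrix so that the predicted PSNR matches a target value.
# DCT coefficients are modelled as Laplacian sources; the per-coefficient quantization
# MSE is estimated by Monte Carlo for a uniform mid-tread quantizer.
import numpy as np

def laplacian_quant_mse(b, step, n=50_000, rng=np.random.default_rng(0)):
    """Estimated MSE of uniformly quantizing a zero-mean Laplacian with scale b."""
    x = rng.laplace(0.0, b, size=n)
    xq = np.round(x / step) * step          # uniform mid-tread quantizer
    return np.mean((x - xq) ** 2)

def predicted_psnr(base_q, scale, laplacian_scales):
    """Predict PSNR from per-coefficient Laplacian models and a scaled matrix."""
    steps = np.maximum(np.round(base_q * scale), 1)
    mse = np.mean([laplacian_quant_mse(b, s)
                   for b, s in zip(laplacian_scales.ravel(), steps.ravel())])
    return 10 * np.log10(255.0 ** 2 / mse)

def matrix_for_target_psnr(base_q, laplacian_scales, target_psnr):
    """Bisection on the scale factor until the predicted PSNR hits the target."""
    lo, hi = 0.01, 20.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if predicted_psnr(base_q, mid, laplacian_scales) > target_psnr:
            lo = mid                         # quality too high -> coarser quantization
        else:
            hi = mid
    return np.maximum(np.round(base_q * lo), 1)
```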
Abstract:
Extensible Dependency Grammar (XDG; Debusmann, 2007) is a flexible, modular dependency grammar framework in which sentence analyses consist of multigraphs and processing takes the form of constraint satisfaction. This paper shows how XDG lends itself to grammar-driven machine translation and introduces the machinery necessary for synchronous XDG. Since the approach relies on a shared semantics, it resembles interlingua MT. It differs in that there are no separate analysis and generation phases. Rather, translation consists of the simultaneous analysis and generation of a single source-target sentence.
Abstract:
Many engineering problems that can be formulated as constrained optimization problems result in solutions given by a waterfilling structure; the classical example is the capacity-achieving solution for a frequency-selective channel. For simple waterfilling solutions with a single waterlevel and a single constraint (typically, a power constraint), some algorithms have been proposed in the literature to compute the solutions numerically. However, some other optimization problems result in significantly more complicated waterfilling solutions that include multiple waterlevels and multiple constraints. For such cases, it may still be possible to obtain practical algorithms to evaluate the solutions numerically but only after a painstaking inspection of the specific waterfilling structure. In addition, a unified view of the different types of waterfilling solutions and the corresponding practical algorithms is missing. The purpose of this paper is twofold. On the one hand, it overviews the waterfilling results existing in the literature from a unified viewpoint. On the other hand, it bridges the gap between a wide family of waterfilling solutions and their efficient implementation in practice; to be more precise, it provides a practical algorithm to evaluate numerically a general waterfilling solution, which includes the currently existing waterfilling solutions and others that may possibly appear in future problems.
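For reference, the single-waterlevel, single-power-constraint case mentioned above can be computed with a few lines of bisection on the waterlevel; this is only the textbook special case, not the general multi-level, multi-constraint algorithm developed in the paper.

```python
# Minimal sketch of the classical single-constraint waterfilling solution
# (power allocation p_i = max(0, mu - 1/g_i) subject to sum(p_i) = P_total),
# solved by bisection on the waterlevel mu.
import numpy as np

def waterfilling(gains, total_power, iters=100):
    """Return the power allocation over channels with power gains `gains`."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + inv.max()    # bracket for the waterlevel
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv)        # water poured above each "floor" 1/g_i
        if p.sum() > total_power:
            hi = mu                          # too much power used -> lower the level
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

# Example: 4 subchannels, total power budget of 1
print(waterfilling([0.5, 1.0, 2.0, 4.0], total_power=1.0))
```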
Abstract:
The well-known structure of an array combiner along with a maximum likelihood sequence estimator (MLSE) receiver is the basis for the derivation of a space-time processor presenting good properties in terms of co-channel and intersymbol interference rejection. The use of spatial diversity at the receiver front-end together with a scalar MLSE implies a joint design of the spatial combiner and the impulse response for the sequence detector. This is addressed using the MMSE criterion under the constraint that the desired user signal power is not cancelled, yielding an impulse response for the sequence detector that is matched to the channel and combiner response. The procedure maximizes the signal-to-noise ratio at the input of the detector and exhibits excellent performance in realistic multipath channels.
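For orientation only, a textbook form of an MMSE combiner with a constraint that prevents cancellation of the desired signal is the following; the paper's joint spatio-temporal design of combiner and detector impulse response is more involved, and R and h below are generic placeholders for the received covariance matrix and the desired-user spatial signature.

```latex
% Generic constrained-MMSE (MVDR-like) combiner, shown as a reference point only.
\begin{aligned}
\min_{\mathbf{w}} \;\; & \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
  \qquad \text{s.t. } \mathbf{w}^{H}\mathbf{h} = 1, \\
\mathbf{w}_{\mathrm{opt}} &= \frac{\mathbf{R}^{-1}\mathbf{h}}{\mathbf{h}^{H}\mathbf{R}^{-1}\mathbf{h}}.
\end{aligned}
```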
Abstract:
This paper presents a relational positioning methodology for flexibly and intuitively specifying offline-programmed robot tasks, as well as for assisting the execution of teleoperated tasks demanding precise movements. In relational positioning, the movements of an object can be restricted totally or partially by specifying its allowed positions in terms of a set of geometric constraints. These allowed positions are found by means of a 3D sequential geometric constraint solver called PMF – Positioning Mobile with respect to Fixed. PMF exploits the fact that in a set of geometric constraints, the rotational component can often be separated from the translational one and solved independently.
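To make the rotational/translational decoupling concrete (an illustrative formulation, not necessarily PMF's internal one): a rigid placement maps x to Rx + t; constraints between directions involve only R, while, once R is fixed, coincidence and distance constraints become linear equations in t.

```latex
% Rigid placement and the two kinds of constraints (illustrative notation).
\mathbf{x} \mapsto \mathbf{R}\,\mathbf{x} + \mathbf{t},
\qquad \mathbf{R} \in SO(3),\ \mathbf{t} \in \mathbb{R}^{3};
\qquad
\underbrace{(\mathbf{R}\,\mathbf{a}) \cdot \mathbf{b} = \cos\theta}_{\text{involves only } \mathbf{R}},
\qquad
\underbrace{\mathbf{n} \cdot (\mathbf{R}\,\mathbf{p} + \mathbf{t}) = d}_{\text{linear in } \mathbf{t} \text{ once } \mathbf{R} \text{ is fixed}}.
```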
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution devised in this way is endowed with a component corresponding to the well-known regularized solution of Tikhonov (1977).
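For context, the Tikhonov construction that the regularized distribution is said to contain is the minimizer of a penalized least-squares functional; a minimal numerical sketch of that classical solution (not of the non-extensive, q = 1/2 maximum-entropy method itself) is:

```python
# Minimal sketch of classical Tikhonov regularization for an ill-conditioned
# linear problem A x = b: minimize ||A x - b||^2 + lam * ||x||^2.
import numpy as np

def tikhonov(A, b, lam):
    """Return argmin_x ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Example: a nearly rank-deficient system where plain least squares is unstable.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
A[:, 1] = A[:, 0] + 1e-8 * rng.normal(size=50)   # near-collinear columns
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.linalg.norm(tikhonov(A, b, lam=1e-2) - x_true))
```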
Abstract:
The aim of this study was to replicate the 5- and 6-factor second-order structures of the 16PF-5. For the 5-factor structure, the structure obtained by Russell and Karol (1995) is taken as the theoretical reference, and for the 6-factor structure (which includes an additional Reasoning factor), the one obtained in American samples by Cattell and Cattell (1995). Three procedures are used to study replicability: (a) exploratory factor analysis, (b) analysis of the orthogonal Procrustes structure, and (c) analysis of the congruence indices between the three factor matrices. The factor matrices obtained in the present study are similar to those reported in the reference studies, although the Procrustes solution turns out to be slightly more similar. The congruence indices are generally acceptable, so it is concluded that the 16PF-5 shows good replicability.
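As an illustration of the congruence analysis mentioned in point (c), Tucker's congruence coefficient between two factor-loading matrices can be computed as below; the actual matrices, samples, and rotations used in the study are not reproduced here.

```python
# Sketch of Tucker's congruence coefficient between two factor-loading matrices,
# a common way of quantifying factor replicability (shown for illustration only).
import numpy as np

def congruence(loadings_a, loadings_b):
    """Column-wise Tucker congruence between two loading matrices of equal shape."""
    A, B = np.asarray(loadings_a), np.asarray(loadings_b)
    num = (A * B).sum(axis=0)
    den = np.sqrt((A ** 2).sum(axis=0) * (B ** 2).sum(axis=0))
    return num / den

# Example with two hypothetical 16-variable, 5-factor solutions
rng = np.random.default_rng(0)
ref = rng.normal(size=(16, 5))
rep = ref + 0.1 * rng.normal(size=(16, 5))     # a close replication
print(np.round(congruence(ref, rep), 3))       # values near 1 indicate replication
```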
Abstract:
Optimization models in metabolic engineering and systems biology typically focus on optimizing a single criterion, usually the synthesis rate of a metabolite of interest or the growth rate. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimizing maximum yield under a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses based on stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic non-linear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. To overcome the numerical difficulties of handling multiple criteria in the optimization, we propose a heuristic approach based on the epsilon-constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to testing in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes ethanol production in the fermentation of Saccharomyces cerevisiae.
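As a sketch of the epsilon-constraint idea on a deliberately toy two-objective problem (the GMA model, the biological objectives, and the Pareto filters of the paper are not reproduced): one objective is optimized while the other is required to reach at least a threshold eps, and eps is swept to trace Pareto-optimal alternatives.

```python
# Generic epsilon-constraint sketch for a two-objective problem:
# maximize f1 while requiring f2 >= eps, sweeping eps to trace Pareto points.
# A toy quadratic model stands in for the GMA metabolic model used in the paper.
import numpy as np
from scipy.optimize import minimize

def f1(x):   # e.g., a production rate (to maximize)
    return x[0]

def f2(x):   # e.g., a competing objective such as growth (to maximize)
    return 1.0 - (x[0] - 0.3) ** 2 + 0.5 * x[1]

bounds = [(0.0, 1.0), (0.0, 1.0)]
pareto = []
for eps in np.linspace(0.4, 1.4, 11):
    res = minimize(lambda x: -f1(x), x0=[0.5, 0.5], bounds=bounds,
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: f2(x) - e}])
    if res.success:
        pareto.append((f1(res.x), f2(res.x)))

for point in pareto:
    print(np.round(point, 3))
```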
Abstract:
In the classical theorems of extreme value theory the limits of suitably rescaled maxima of sequences of independent, identically distributed random variables are studied. The vast majority of the literature on the subject deals with affine normalization. We argue that more general normalizations are natural from a mathematical and physical point of view and work them out. The problem is approached using the language of renormalization-group transformations in the space of probability densities. The limit distributions are fixed points of the transformation and the study of its differential around them allows a local analysis of the domains of attraction and the computation of finite-size corrections.
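In symbols (notation illustrative), the affine setting of the classical theorems and the more general normalizations considered here can be contrasted as:

```latex
\begin{aligned}
&\text{affine:} && \Pr\!\left(\frac{M_n - b_n}{a_n} \le x\right) \xrightarrow[n\to\infty]{} G(x),\\
&\text{general:} && \Pr\!\left(g_n^{-1}(M_n) \le x\right) \xrightarrow[n\to\infty]{} G(x),
\end{aligned}
```

where M_n = max(X_1, ..., X_n), a_n > 0 and b_n are norming constants, g_n is a monotone normalizing function, and the limit laws G appear as fixed points of the renormalization-group transformation acting on probability densities.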
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes, but over the last few years they have gone from an entertainment to an interesting research area – doubly interesting, in fact. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard, structured combinatorial search problems, and thanks to their characteristics and behavior, they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balancing they guarantee between the constraints of the problem, by finely controlling how the holes are distributed over the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem that GSP generalizes. Finally, we provide a study of the correlation between backbone variables – variables that take the same value in all the solutions of an instance – and the hardness of GSP.
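To make the CSP/SAT modelling concrete, a standard "direct" CNF encoding of the GSP constraints (one Boolean variable per cell/value pair) can be generated as follows; this is the textbook encoding, shown for illustration, and the exact encodings studied in the work may differ. Pre-filled clue cells would be added as unit clauses.

```python
# Sketch of a direct CNF encoding of a Generalized Sudoku with r x c rectangular
# blocks (order N = r*c): one Boolean variable per (row, col, value) triple.
# Clauses: each cell holds at least one value, and no value repeats in any
# row, column, or block.
from itertools import combinations

def var(row, col, val, N):
    """1-based DIMACS variable index for 'cell (row, col) takes value val'."""
    return row * N * N + col * N + val + 1

def gsp_cnf(r, c):
    N = r * c
    clauses = []
    cells = [(i, j) for i in range(N) for j in range(N)]
    # each cell holds at least one value
    for i, j in cells:
        clauses.append([var(i, j, v, N) for v in range(N)])
    # no value repeats within a group of cells
    def all_different(group):
        for v in range(N):
            for (i1, j1), (i2, j2) in combinations(group, 2):
                clauses.append([-var(i1, j1, v, N), -var(i2, j2, v, N)])
    for i in range(N):
        all_different([(i, j) for j in range(N)])          # rows
    for j in range(N):
        all_different([(i, j) for i in range(N)])          # columns
    for bi in range(c):
        for bj in range(r):
            all_different([(bi * r + di, bj * c + dj)
                           for di in range(r) for dj in range(c)])  # r x c blocks
    return clauses

print(len(gsp_cnf(2, 3)))   # number of clauses for a 6x6 GSP with 2x3 blocks
```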
Abstract:
Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness, beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate to problem hardness.
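As a small illustration of the backbone notion (independent of the instance generators and solvers used here), the backbone can be computed directly from an instance's full solution set:

```python
# Small illustration of the "backbone": the cell/value assignments shared by every
# solution of an instance. Solutions are represented as dicts mapping cells to
# values; enumerating them (e.g., with a SAT or CSP solver) is assumed elsewhere.
def backbone(solutions):
    """Return the set of (cell, value) pairs common to all solutions."""
    if not solutions:
        return set()
    common = set(solutions[0].items())
    for sol in solutions[1:]:
        common &= set(sol.items())
    return common

# Toy example: two solutions of some puzzle that agree only on cell (0, 0)
sols = [{(0, 0): 3, (0, 1): 1}, {(0, 0): 3, (0, 1): 2}]
print(backbone(sols))   # {((0, 0), 3)} -> cell (0, 0) is a backbone variable
```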