970 results for energetic constraint
Abstract:
We propose a compressive sensing algorithm that exploits geometric properties of images to recover high-quality images from few measurements. The image reconstruction is done by iterating the following two steps: 1) estimation of the normal vectors of the image level curves, and 2) reconstruction of an image fitting the normal vectors, the compressed sensing measurements, and the sparsity constraint. The proposed technique extends naturally to nonlocal operators and graphs, exploiting the repetitive nature of textured images to recover fine detail structures. In both cases, the problem is reduced to a series of convex minimization problems that can be efficiently solved with a combination of variable splitting and augmented Lagrangian methods, leading to fast and easy-to-code algorithms. Extensive experiments show a clear improvement over related state-of-the-art algorithms in the quality of the reconstructed images and in the robustness of the proposed method to noise, different kinds of images, and reduced measurements.
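To make the two-step iteration described above concrete, the following is a minimal sketch in Python (NumPy only). It is not the paper's solver: instead of variable splitting with augmented Lagrangian updates, it uses a plain proximal-gradient surrogate for the reconstruction step, and the function names, parameters and step sizes are all illustrative.

```python
import numpy as np

def estimate_normals(u, eps=1e-8):
    """Step 1: unit normals of the level curves = normalized image gradient."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx**2 + gy**2) + eps
    return gx / mag, gy / mag

def grad2d(u):
    """Forward-difference gradient with replicated boundary."""
    return (np.diff(u, axis=1, append=u[:, -1:]),
            np.diff(u, axis=0, append=u[-1:, :]))

def div2d(px, py):
    """Discrete divergence (negative adjoint of grad2d, up to boundary terms)."""
    dx = np.diff(px, axis=1, prepend=np.zeros_like(px[:, :1]))
    dy = np.diff(py, axis=0, prepend=np.zeros_like(py[:1, :]))
    return dx + dy

def recover(y, A, shape, outer=5, inner=300, lam=0.05, mu=0.5, step=1e-3):
    """Alternate normal estimation (step 1) with a reconstruction that fits the
    normals, the CS measurements and an l1 sparsity term (step 2)."""
    u = (A.T @ y).reshape(shape)                       # back-projection start
    for _ in range(outer):
        nx, ny = estimate_normals(u)                   # step 1
        for _ in range(inner):                         # step 2 (gradient surrogate)
            gx, gy = grad2d(u)
            grad = (A.T @ (A @ u.ravel() - y)).reshape(shape)   # data fidelity
            grad -= mu * div2d(gx - nx, gy - ny)                # align gradient with normals
            u = u - step * grad
            u = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)  # l1 shrinkage
    return u
```

A call would look like recover(y, A, (16, 16)) with y = A @ x_true.ravel() for some test image x_true; the paper's variable-splitting/augmented-Lagrangian solver plays the role of this toy inner loop, with far better convergence.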
Abstract:
Cyclists' main focus has always been training to improve their physiological condition and performance. Over the years, cycling, like virtually every sport, has modernised, and not only technologically. This has led to the emergence of specialists: riders destined to stand out above the rest only under certain conditions. One of the most restricted of these conditions is the bunch sprint, the terrain of the so-called sprinters, who shine above the rest thanks to their power, top speed and acceleration. Training for this type of speciality has revealed several ambiguities and some problems of theoretical grounding. A sprint in cycling comes after a large depletion of energy reserves and considerable muscular fatigue, so training it with blocks of speed work makes little sense. Nor is the resource many teams use a viable option: recruiting young riders from the track and letting their genetics (fast-twitch fibres) and their characteristics as a pistard1 do the rest, because over the years they lose this advantage. This study sets out to find a way to work on and enhance a cyclist's sprint through explosive strength, while preserving aerobic capacity so that endurance is not compromised. To achieve this, tests were carried out: one focused on measuring the subjects' sprint performance, the other on assessing their explosive strength by means of vertical jumps. Once the results of the first round had been obtained, the subjects underwent a combined overload training programme, in order to observe, in the second round, whether the results were significant. In conclusion, an improvement in most aspects of all tests was observed for all subjects, and there is probably a significant correlation between explosive strength and sprinting ability, although the results should be corroborated with a larger sample.
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem over linear arithmetic only. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than unsatisfiability, which is more relevant in several applications, as we illustrate with several examples. Nevertheless, we also present new techniques based on the analysis of unsatisfiable cores that allow one to efficiently prove unsatisfiability as well for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both the academic and the industrial world.
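The core idea of encoding a non-linear problem into a linear-arithmetic SMT problem can be illustrated with a small case-splitting example. This is only meant to convey the flavour of such encodings, not the paper's actual procedure; it assumes the z3 Python bindings are installed, and the constraint system, variable names and bounds are invented for the example.

```python
# Hypothetical example: decide satisfiability of  x*y >= 10,  x + y <= 7,  1 <= x <= 5
# by replacing the non-linear product x*y with a fresh variable p and case-splitting
# on the finitely bounded variable x, so that only linear arithmetic remains.
from z3 import Int, Solver, Or, And, sat

x, y, p = Int('x'), Int('y'), Int('p')   # p stands for the product x*y

s = Solver()
s.add(x >= 1, x <= 5, x + y <= 7, p >= 10)
s.add(Or([And(x == v, p == v * y) for v in range(1, 6)]))   # p = x*y, by cases on x

if s.check() == sat:
    print(s.model())    # e.g. x = 5, y = 2, p = 10 is a satisfying assignment
else:
    print("no solution within the chosen bounds")
```

In a bounded encoding like this one, an unsat answer only rules out solutions within the chosen bounds; genuine unsatisfiability proofs are what the unsatisfiable-core techniques mentioned in the abstract address.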
Abstract:
Quantitative trait loci analysis of natural Arabidopsis thaliana accessions is increasingly exploited for gene isolation. However, to date this has mostly revealed deleterious mutations. Among them, a loss-of-function allele identified the root growth regulator BREVIS RADIX (BRX). Here we present evidence that BRX and the paralogous BRX-LIKE (BRXL) genes are under selective constraint in monocotyledons as well as dicotyledons. Unexpectedly, however, whereas none of the Arabidopsis orthologs except AtBRXL1 could complement brx null mutants when expressed constitutively, nearly all monocotyledon BRXLs tested could. Thus, BRXL proteins seem to be more diversified in dicotyledons than in monocotyledons. This functional diversification was correlated with accelerated rates of sequence divergence in the N-terminal regions. Population genetic analyses of 30 haplotypes are suggestive of an adaptive role of AtBRX and AtBRXL1. In two accessions, Lc-0 and Lov-5, seven amino acids are deleted in the variable region between the highly conserved C-terminal, so-called BRX domains. Genotyping of 42 additional accessions also found this deletion in Kz-1, Pu2-7, and Ws-0. In segregating recombinant inbred lines, the Lc-0 allele (AtBRX(Lc-0)) conferred significantly enhanced root growth. Moreover, when constitutively expressed in the same regulatory context, AtBRX(Lc-0) complemented brx mutants more efficiently than an allele without deletion. The same was observed for AtBRXL1, which compared with AtBRX carries a 13 amino acid deletion that encompasses the deletion found in AtBRX(Lc-0). Thus, the AtBRX(Lc-0) allele seems to contribute to natural variation in root growth vigor and provides a rare example of an experimentally confirmed, hyperactive allelic variant.
Abstract:
Congenital hemiparesis is one of the most frequent pediatric motor disorders. Upper limb rehabilitation of the hemiparetic child has evolved considerably during the last decade through the use of focal chemical denervation (intramuscular botulinum toxin) and the introduction of novel rehabilitation techniques such as constraint-induced movement therapy or robotic re-education.
Abstract:
We used high-resolution swath-bathymetry data to characterise the morphology of the abandoned subaqueous Sol de Riu delta lobe in the Ebro Delta, Western Mediterranean Sea. This study aims to assess the influence of an abandoned delta lobe on present-day coastal dynamics in a micro-tidal environment. Detailed mapping of the relict Sol de Riu lobe also revealed a set of bedforms interpreted as footprints of human activities: seasonal V-shaped depressions on the middle shoreface due to boat anchoring, and old trawling marks between 16 and 18 m water depth. Estimates of bottom-sediment mobility showed that the shallowest shoreface (i.e. less than 7 m depth) is the most dynamic part of the relict lobe, while the middle shoreface has experienced significant morphological changes since the lobe was abandoned. The deepest shoreface (i.e. water depths in excess of 15 m), which corresponds to the front of the lobe, shows very little potential for morphological change. Simulations showed that while the relict lobe does not significantly affect the typical short-period waves (Tp ≈ 4 s) in the study area, it does interfere with the most energetic wave conditions (Tp ≥ 7 s), acting as a shoal that concentrates wave energy along the shoreline northwest of the lobe. The consequence of this modification of the high-energy wave propagation pattern by the relict lobe is an alteration of the wave-induced littoral sediment dynamics with respect to a situation without the lobe.
Abstract:
Recent multisensory research has emphasized the occurrence of early, low-level interactions in humans. As such, it is proving increasingly necessary to also consider the kinds of information likely to be extracted from the unisensory signals that are available at the time and location of these interaction effects. This review addresses current evidence regarding how the spatio-temporal brain dynamics of auditory information processing likely curtails the information content of multisensory interactions observable in humans at a given latency and within a given brain region. First, we consider the time course of signal propagation as a limitation on when auditory information (of any kind) can impact the responsiveness of a given brain region. Next, we give an overview of the dual-pathway model for the processing of auditory spatial and object information, ranging from rudimentary to complex environmental stimuli. These dual pathways are considered an intrinsic feature of auditory information processing; they are not only partially distinct in their associated brain networks, but also (and perhaps more importantly) manifest only after several tens of milliseconds of cortical signal processing. This architecture of auditory functioning would thus pose a constraint on when and in which brain regions specific spatial and object information is available for multisensory interactions. We then separately consider evidence regarding the mechanisms and dynamics of spatial and object processing, with a particular emphasis on when discriminations along either dimension are likely performed by specific brain regions. We conclude by discussing open issues and directions for future research.
Abstract:
We study energy relaxation in thermalized one-dimensional nonlinear arrays of the Fermi-Pasta-Ulam type. The ends of the thermalized systems are placed in contact with a zero-temperature reservoir via damping forces. Harmonic arrays relax by sequential phonon decay into the cold reservoir, the lower-frequency modes relaxing first. The relaxation pathway for purely anharmonic arrays involves the degradation of higher-energy nonlinear modes into lower-energy ones. The lowest-energy modes are absorbed by the cold reservoir, but a small amount of energy is persistently left behind in the array in the form of almost stationary low-frequency localized modes. Arrays with interactions that contain both a harmonic and an anharmonic contribution exhibit behavior that involves the interplay of phonon modes and breather modes. At long times relaxation is extremely slow due to the spontaneous appearance and persistence of energetic high-frequency stationary breathers. Breather behavior is further ascertained by explicitly injecting a localized excitation into the thermalized arrays and observing the relaxation behavior.
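A minimal numerical sketch of the setup described above, a chain with harmonic plus quartic (FPU-β-type) couplings whose end particles are damped so as to act as a zero-temperature reservoir, is given below. It is not the authors' simulation code: chain length, coupling constants, damping coefficient, the crude random 'thermalized' initial condition and the simple symplectic-Euler integrator are all illustrative choices.

```python
import numpy as np

def fpu_relax(n=64, k=1.0, beta=1.0, gamma=0.5, dt=0.01, steps=200_000, seed=0):
    """Return the total energy of the chain, sampled every 1000 steps, as the
    two damped end particles drain energy into the zero-temperature reservoir."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                      # displacements, fixed walls at both ends
    v = rng.standard_normal(n)           # crude stand-in for a thermalized state
    energies = []
    for t in range(steps):
        e = np.diff(np.concatenate(([0.0], x, [0.0])))          # bond extensions
        if t % 1000 == 0:
            energies.append(0.5 * np.sum(v**2)
                            + np.sum(0.5 * k * e**2 + 0.25 * beta * e**4))
        phi = k * e + beta * e**3                               # harmonic + quartic bond force
        f = phi[1:] - phi[:-1]                                  # net force on each particle
        f[0] -= gamma * v[0]                                    # damping only at the ends
        f[-1] -= gamma * v[-1]
        v += dt * f                                             # symplectic-Euler update
        x += dt * v
    return np.array(energies)
```

Setting k = 0 gives the purely anharmonic case; in such a trace the very slow late-time relaxation described in the abstract would appear as a long-lived energy plateau.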
Abstract:
The influence of external factors on food preferences and choices is poorly understood. Knowing which food-external cues impact the sensory processing and cognitive valuation of food, and how, would strongly benefit a more integrative understanding of food intake behavior and potential means of interfering with deviant eating patterns to avoid detrimental long-term health consequences for individuals. We investigated whether written labels with positive and negative (as opposed to 'neutral') valence differentially modulate the spatio-temporal brain dynamics in response to the subsequent viewing of high- and low-energy food images. Electrical neuroimaging analyses were applied to visual evoked potentials (VEPs) from 20 normal-weight participants. VEPs and source estimations in response to high- and low-energy foods were differentially affected by the valence of the preceding word labels over the ~260-300 ms post-stimulus period. These effects were only observed when high-energy foods were preceded by labels with positive valence. Neural sources in occipital as well as posterior, frontal, insular and cingulate regions were down-regulated. These findings favor cognitive-affective influences especially on the visual responses to high-energy food cues, potentially indicating decreases in cognitive control and goal-adaptive behavior. Inverse correlations between insular activity and effectiveness in food classification further indicate that this down-regulation directly impacts food-related behavior.
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, which is in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to have much more control over the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods. Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
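The two distinctive ingredients described above, perturbations drawn from a distribution conditioned to the geophysical data and an objective that only constrains the spatial statistics at small lags, can be sketched as follows. This is a one-dimensional toy with a semivariogram standing in for the covariance constraint; the names, parameters and cooling schedule are invented, and it is not the authors' algorithm.

```python
import numpy as np

def semivariogram(field, max_lag):
    """Empirical semivariogram of a 1-D sequence for lags 1..max_lag."""
    return np.array([0.5 * np.mean((field[h:] - field[:-h])**2)
                     for h in range(1, max_lag + 1)])

def anneal(cond_mean, cond_std, target_gamma, max_lag=5, iters=50_000,
           t0=1.0, cooling=0.9995, seed=0):
    """Simulated annealing where every perturbation is a fresh draw from the
    local distribution conditioned to the geophysical data (summarized here by
    a per-cell mean and standard deviation), and the objective only matches the
    semivariogram at small lags."""
    rng = np.random.default_rng(seed)
    n = len(cond_mean)
    field = rng.normal(cond_mean, cond_std)            # initial conditional realization
    obj = np.sum((semivariogram(field, max_lag) - target_gamma)**2)
    temp = t0
    for _ in range(iters):
        i = rng.integers(n)
        old = field[i]
        field[i] = rng.normal(cond_mean[i], cond_std[i])        # conditional draw
        new_obj = np.sum((semivariogram(field, max_lag) - target_gamma)**2)
        if new_obj < obj or rng.random() < np.exp((obj - new_obj) / temp):
            obj = new_obj                                       # accept the perturbation
        else:
            field[i] = old                                      # reject and restore
        temp *= cooling
    return field
```

Recomputing the full semivariogram at every step keeps the sketch short; a practical implementation would update it incrementally and stop once the objective drops below a tolerance.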
Abstract:
Innate immunity reacts to conserved bacterial molecules. The outermost lipopolysaccharide (LPS) of Gram-negative organisms is highly inflammatory. It activates responsive cells via the specific CD14 and toll-like receptor-4 (TLR4) surface receptor and co-receptors. Gram-positive bacteria do not contain LPS, but carry surface teichoic acids, lipoteichoic acids and peptidoglycan instead. Among these, the thick peptidoglycan is the most conserved. It also triggers cytokine release via CD14, but uses the TLR2 co-receptor instead of the TLR4 used by LPS. Moreover, whole peptidoglycan is 1000-fold less active than LPS on a weight-to-weight basis. This suggests either that it is not important for inflammation, or that only part of it is reactive while the rest acts as ballast. Biochemical dissection of Staphylococcus aureus and Streptococcus pneumoniae cell walls indicates that the second assumption is correct. Long, soluble peptidoglycan chains (approximately 125 kDa) are poorly active. Hydrolysing these chains to their minimal unit (2 sugars and a stem peptide) completely abrogates inflammation. Enzymatic dissection of the pneumococcal wall generated a mixture of highly active fragments, consisting of trimeric stem peptides, and poorly active fragments, consisting of simple monomers and dimers or highly polymerized structures. Hence, the optimal constraint for activation might be 3 cross-linked stem peptides. The importance of structural constraint was demonstrated in additional studies. For example, replacing the first L-alanine in the stem peptide with a D-alanine totally abrogated inflammation in experimental meningitis. Likewise, modifying the D-alanine decorations of lipoteichoic acids with L-alanine, or deacylating them from their diacylglycerol lipid anchor, also decreased the inflammatory response. Thus, although considered a broad-spectrum pattern-recognizing system, innate immunity can detect very subtle differences in Gram-positive walls. This high specificity underlines the importance of using well-characterized microbial material when investigating the system.
Abstract:
The concept of energy gap(s) is useful for understanding the consequence of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. The energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the readjustment of the energy imbalance gap occurs over time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods: i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. To illustrate the difficulty of accurately assessing an energy gap, we use as an example a recent epidemiological study that tracked changes in total energy intake (estimated by gross food availability) and body weight over three decades in the US, combined with predictions of total energy expenditure from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap, purported to be entirely due to increased food intake. Based on an estimate of the change in energy intake judged to be more reliable (i.e. from the same study population), and together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity, or both) are clouded by a high level of uncertainty.
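As a rough illustration of the retrospective method mentioned above (estimating the accumulated energy storage rate from an observed change in body weight), here is a back-of-the-envelope calculation in Python. The ~7700 kcal per kg figure is a common approximation for largely-fat weight gain, and the calculation deliberately ignores the extra maintenance cost of a heavier body, which is part of the 'efficiency' issue discussed in the abstract.

```python
KCAL_PER_KG = 7700      # approximate energy stored per kg of (largely fat) tissue gained

def energy_gap_kcal_per_day(weight_change_kg, years):
    """Average daily positive energy balance implied by a weight change."""
    return weight_change_kg * KCAL_PER_KG / (years * 365.25)

# Example: a 10 kg gain over 30 years implies only ~7 kcal/day of net storage,
# far smaller than typical reported changes in food availability, which is why
# the 'efficiency' of weight gain matters for interpretation.
print(round(energy_gap_kcal_per_day(10, 30), 1))   # -> 7.0
```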
Abstract:
This paper addresses the surprising lack of quality control in the analysis and selection of energy policies observed in recent decades. As an example, we discuss the delusional idea that it is possible to replace fossil energy with large-scale ethanol production from agricultural crops. But if large-scale ethanol production is not practical in energetic terms, why has so much money been invested in it, and why is it still being invested? To answer this question we introduce two concepts useful for framing, in general terms, the predicament of quality control in science: (i) the concept of “granfalloons” proposed by K. Vonnegut (1963), flagging the danger of the formation of “crusades to save the world” devoid of real meaning; these granfalloons are often used by powerful lobbies to distort policy decisions; and (ii) the concept of Post-Normal science by S. Funtowicz and J. Ravetz (1990), indicating a standard predicament faced by science when producing information for governance. When uncertainty, multiple scales and legitimate but contrasting views are mixed together, it becomes impossible to deal with complex issues using the conventional scientific approach based on reductionism. We finally discuss the implications of a different approach to the assessment of alternative energy sources by introducing the concept of Promethean technology.
Abstract:
BACKGROUND: Over the years, somatic care has become increasingly specialized. Furthermore, a rising number of patients requiring somatic care also present with a psychiatric comorbidity. As a consequence, the time and resources needed to care for these patients can interfere with the course of somatic treatment and influence the patient-caregiver relationship. In the light of these observations, the Liaison Psychiatry Unit at the University Hospital in Lausanne (CHUV) has trained its nursing staff in order to strengthen its action within the general hospital. What has been developed is a reflexive approach based on the supervision of somatic staff, in order to improve the effectiveness of liaison psychiatry interventions with the caregivers in charge of patients. The kind of supervision we have developed is the result of a real partnership with somatic staff. In addition, in order to better understand the complexity of the interactions between the two systems involved, the patient's and the caregivers', we use several theoretical references in an integrative manner. PSYCHOANALYTICAL REFERENCE: The psychoanalytical model allows us to better understand the dynamics between the supervisor and the supervised group, in order to contain and give meaning to the affects arising in the supervision space. "Containing function" and "transitional phenomena" refer to the experience in which emotions can find a space where they can be taken in and processed in a secure and supportive manner. These concepts, along with that of the "psychic envelope", were initially developed to explain the psychological development of the baby in its early interactions with its mother or her surrogate. In the field of supervision, they make us aware of these complex phenomena and of the diverse qualities a supervisor needs to draw on, such as attention, support and encouragement, in order to offer a secure environment. SYSTEMIC REFERENCE: A new perspective on the patient's complexity is revealed by the group dynamics. The supervisor's attention is mainly focused on the work of affects. However, these are often buried under a defensive shell serving as a temporary protection, which prevents the caregiver from recognizing his or her own emotions and thereby increases the difficulties in the relationship with the patient. Whenever the work of putting emotions into words fails, we use "sculpting", a technique derived from the systemic model. Through this type of analogical language, affects can emerge without constraint or feelings of danger. Through "playing" in that "transitional space", new exchanges appear between group members and allow new behaviors to be conceived. In practice, we ask the supervisee who is presenting a complex situation to create a spatial representation of his or her understanding of it, by arranging the characters significant to the situation: the patient, somatic staff members, relatives of the patient, etc. In silence, the supervisee shapes the characters into postures and arranges them in the room. Each sculpted character is identified, named, and positioned, with his or her gaze set in a specific direction. Finally, the sculptor places him- or herself in his or her own role. When the sculpture is complete, and after the positions have been held for a few moments, we ask the participants to talk about their experience. By means of this physical representation, participants in the sculpture discover perceptions and feelings that had been unknown to them until then.
From this analogical representation, a reflection and hypotheses for understanding can thus arise and be developed within the group. CONCLUSION: Through the use of the concepts of "containing function" and "transitional space", we position ourselves within the scope of encounter and dialogue. Through the use of the systemic technique of "sculpting", we promote the process of understanding rather than that of explaining, which would place us in the position of experts. The experience of these encounters has shown us that what we need to focus on is indeed what happens in this transitional space, in terms of dynamics and process. The encounter and the sharing of competencies both allow a new understanding of the situation at hand, which of course has to be verified in the reality of the patient-caregiver relationship. It is often an opportunity for the caregiver to adjust his or her interpersonal skills and recover a containing function, so as to respond better to the patient's needs.
Abstract:
This article presents a proposal for an eco-label that assesses the quality of areas of natural interest. Since no service eco-label of this kind exists, previous ecological service certifications and assessment systems for natural and urban areas were studied. From this study, 110 pre-existing indicators were evaluated, of which 59 were adapted: 29 mandatory and 30 recommended, divided into three flows (Human Flow, Natural Flow and Management Flow) and 17 vectors; with these, an assessment system tailored to this eco-label was developed. After establishing the regulations and general conditions for awarding the proposed eco-label, a pilot test was carried out in the Vall d'Alinyà (province of Lérida), focused on the Human Flow, positively verifying the application of the certification in this area. The results indicate that more than 90% of the selected indicators are adequate, while deficiencies were observed mainly in the water and energy systems of the Vall d'Alinyà. Accordingly, a series of improvement proposals was drawn up.