918 results for Generation of test processes


Abstract:

Collectively, research aimed at understanding the regeneration of certain tissues has unveiled the existence of common key regulators. Knockout studies of the murine Nuclear Factor I-C (NFI-C) transcription factor revealed a misregulation of growth factor signaling, in particular that of transforming growth factor β1 (TGF-β1), which led to alterations of skin wound healing and of the growth of its appendages, suggesting that NFI-C may be a general regulator of regenerative processes. We sought to investigate this further by determining whether NFI-C plays a role in liver regeneration. Liver regeneration following two-thirds removal of the liver by partial hepatectomy (PH) is a well-established regenerative model in which changes elicited in hepatocytes following injury lead to a rapid, phased proliferation. However, the mechanisms controlling the action of liver proliferative factors such as TGF-β1 and plasminogen activator inhibitor-1 (PAI-1) remain largely unknown. We show that the absence of NFI-C impaired hepatocyte proliferation owing to an overexpression of PAI-1 and the subsequent suppression of urokinase plasminogen activator (uPA) activity and of hepatocyte growth factor (HGF) signaling, a potent hepatocyte mitogen. This indicates that NFI-C first acts to promote hepatocyte proliferation at the onset of liver regeneration in wild-type mice. The subsequent transient downregulation of NFI-C, which can be explained by a self-regulatory feedback loop with TGF-β1, may limit the number of hepatocytes entering the first wave of cell division and/or prevent late initiations of mitosis. Overall, we conclude that NFI-C acts as a regulator of the phased hepatocyte proliferation during liver regeneration. Taken together with NFI-C's actions in other in vivo models of (re)generation, it is plausible that NFI-C is a general regulator of regenerative processes.

Abstract:

MicroRNAs (miRNAs) are small non-coding RNAs that regulate a variety of biological processes. Cell-free miRNAs detected in blood plasma are used as specific and sensitive markers of physiological processes and some diseases. Circulating miRNAs are highly stable in body fluids, for example plasma. Therefore, profiles of circulating miRNAs have been investigated for potential use as novel, non-invasive anti-doping biomarkers. This review describes the biological mechanisms underlying the variation of circulating miRNAs, revealing that they have great potential as a new class of biomarker for detection of doping substances. The latest developments in extraction and profiling technology, and the technical design of experiments useful for anti-doping, are also discussed. Longitudinal measurements of circulating miRNAs in the context of the athlete biological passport are proposed as an efficient strategy for the use of these new markers. The review also emphasizes potential challenges for the translation of circulating miRNAs from research into practical anti-doping applications.

Abstract:

Objective: The processes of change implied in weight management remain unclear. The present study aimed to identify these processes by validating a questionnaire designed to assess processes of change (the P-Weight) in line with the transtheoretical model. The relationship of processes of change with stages of change and other external variables is also examined. Methods: Participants were 723 people from community and clinical settings in Barcelona. Their mean age was 32.07 (SD = 14.55) years; most of them were women (75.0%), and their mean BMI was 26.47 (SD = 8.52) kg/m². They all completed the P-Weight and the stages of change questionnaire (S-Weight), both applied to weight management, as well as two subscales about concern with dieting from the Eating Disorders Inventory-2 and the Eating Attitudes Test-40. Results: A 34-item version of the P-Weight was obtained by means of a refinement process. A principal components analysis applied to half of the sample identified four processes of change. A confirmatory factor analysis was then carried out with the other half of the sample, revealing that a model of four freely correlated first-order factors showed the best fit (GFI = 0.988, AGFI = 0.986, NFI = 0.986, and SRMR = 0.0559). Corrected item-total correlations (0.322-0.865) and Cronbach's alpha coefficients (0.781-0.960) were adequate. The relationship between the P-Weight and the S-Weight and the concern-with-dieting measures from the other questionnaires supported the validity of the scale. Conclusion: The study identified processes of change involved in weight management and reports the adequate psychometric properties of the P-Weight. It also reveals the relationship between processes and stages of change and other external variables.
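The internal-consistency figures quoted above (corrected item-total correlations and Cronbach's alpha) follow standard formulas. As a rough illustration only, the sketch below computes both for a generic item-score matrix; the toy data and variable names are hypothetical and are not taken from the P-Weight study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Toy example: 6 respondents answering 4 Likert-type items (hypothetical data).
scores = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
print("corrected item-total r:", np.round(corrected_item_total(scores), 3))
```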

Abstract:

Strenx® 960 MC is a direct-quenched Ultra High Strength Steel (UHSS) with low carbon content. Although this material combines high strength and good ductility, it is highly sensitive to fabrication processes. The presence of stress concentrations due to structural discontinuities or notches highlights the role of these fabrication effects on the deformation capacity of the material. For this reason, a series of tensile tests was performed both on the pure base material (BM) and on material subjected to Heat Input (HI) and Cold Forming (CF). The surface of the material was dressed with a laser beam at a given travel speed to study the effect of HI, while CF was applied by bending the specimens to a given angle prior to the tensile test. The results illustrate the impact of these processes on the deformation capacity of the material, especially when the material has experienced HI from welding or similar processes. In order to compare the results with numerical simulation, the commercial explicit finite element package LS-DYNA was used. The results show acceptable agreement between the experimental and numerical outcomes.
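When tensile-test data are compared with an explicit finite element simulation, the measured engineering stress-strain curve is commonly converted to true stress and strain before it is used as material input. The sketch below shows that generic conversion under the usual constant-volume assumption, valid up to necking; the numerical values are hypothetical and the procedure is not claimed to be the one used in this study.

```python
import numpy as np

def true_stress_strain(eng_strain, eng_stress):
    """Convert engineering stress-strain to true stress-strain.

    Valid up to the onset of necking, assuming constant volume:
        eps_true = ln(1 + eps_eng)
        sig_true = sig_eng * (1 + eps_eng)
    """
    eng_strain = np.asarray(eng_strain, dtype=float)
    eng_stress = np.asarray(eng_stress, dtype=float)
    return np.log1p(eng_strain), eng_stress * (1.0 + eng_strain)

# Hypothetical points from a tensile test (strain [-], stress [MPa]).
eps_eng = np.array([0.000, 0.002, 0.010, 0.030, 0.060])
sig_eng = np.array([0.0, 420.0, 980.0, 1020.0, 1040.0])
eps_true, sig_true = true_stress_strain(eps_eng, sig_eng)
for e, s in zip(eps_true, sig_true):
    print(f"eps_true = {e:.4f}, sig_true = {s:.1f} MPa")
```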

Abstract:

The relationship between the child's cognitive development and neurological maturation has been of theoretical interest for many years. Due to difficulties such as the lack of sophisticated techniques for measuring neurological changes and a paucity of normative data, few studies exist that have attempted to correlate the two factors. Recent theory on intellectual development has proposed that neurological maturation may be a factor in the increase of short-term memory storage space. Improved technology has allowed reliable recordings of neurological maturation. In an attempt to correlate cognitive development and neurological maturation, this study tested 3- and 11-year-old children. Fine motor and gross motor short-term memory tests were used to index cognitive development. Somatosensory evoked potentials elicited by median nerve stimulation were used to measure the time required for the sensation to pass along the nerve to specific points on the somatosensory pathway. Times were recorded for N14, N20, and P22 interpeak latencies. Maturation of the central nervous system (brain and spinal cord) and the peripheral nervous system (outside the brain and spinal cord) was indicated by the recorded times. Significant developmental differences occurred between 3- and 11-year-olds in memory levels, peripheral conduction velocity and central conduction times. Linear regression analyses showed that as age increased, memory levels increased and central conduction times decreased. Between the 11-year-old groups, there were no significant differences in central or peripheral nervous system maturation between subjects who achieved a score of 12 or more on the digit span test of the WISC-R and those who scored 7 or lower on the same test. Levels achieved on the experimental gross and fine motor short-term memory tests differed significantly within the 11-year-old group.
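As a rough, hypothetical illustration of the latency analysis described above, the sketch below computes a central conduction time from SSEP peak latencies (often taken as the N14-N20 interpeak interval) and regresses it on age; the latency values are invented and do not come from this study.

```python
import numpy as np

# Hypothetical SSEP peak latencies (ms) for a few children, plus their ages (years).
ages = np.array([3.1, 3.4, 3.8, 10.9, 11.2, 11.5])
n14 = np.array([11.8, 11.5, 11.2, 9.6, 9.4, 9.3])    # subcortical component
n20 = np.array([19.4, 18.9, 18.3, 15.1, 14.8, 14.6]) # cortical component

# Central conduction time estimated as the N14-N20 interpeak interval.
cct = n20 - n14

# Simple linear regression of central conduction time on age.
slope, intercept = np.polyfit(ages, cct, deg=1)
print(f"CCT = {slope:.3f} * age + {intercept:.2f}  (ms)")
```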

Abstract:

Energy production from biomass and the conservation of ecologically valuable grassland habitats are two important issues of agriculture today. The combination of a bioenergy production that minimises environmental impacts and competition with food production for land, with a conversion of semi-natural grasslands through new utilization alternatives for the biomass, led to the development of the IFBB process. Its basic principle is the separation of biomass into a liquid fraction (press fluid, PF) for the production of electric and thermal energy after anaerobic digestion to biogas, and a solid fraction (press cake, PC) for the production of thermal energy through combustion. This study was undertaken to explore mass and energy flows as well as quality aspects of energy carriers within the IFBB process and to determine their dependency on biomass-related and technical parameters. Two experiments were conducted, in which biomass from semi-natural grassland was conserved as silage and subjected to hydrothermal conditioning and a subsequent mechanical dehydration with a screw press. Methane yield of the PF and the untreated silage was determined in anaerobic digestion experiments in batch fermenters at 37°C, with a fermentation time of 13-15 and 27-35 days for the PF and the silage, respectively. Concentrations of dry matter (DM), ash, crude protein (CP), crude fibre (CF), ether extract (EE), neutral detergent fibre (NDF), acid detergent fibre (ADF), acid detergent lignin (ADL) and elements (K, Mg, Ca, Cl, N, S, P, C, H, N) were determined in the untreated biomass and the PC. Higher heating value (HHV) and ash softening temperature (AST) were calculated based on elemental concentrations. The chemical composition of the PF and the mass flows of all plant compounds into the PF were calculated. In the first experiment, biomass from five different semi-natural grassland swards (Arrhenaterion I and II, Caricion fuscae, Filipendulion ulmariae, Polygono-Trisetion) was harvested at one late sampling (19 July or 31 August) and ensiled. Each silage was subjected to three different temperature treatments (5°C, 60°C, 80°C) during hydrothermal conditioning. Based on observed methane yields and HHV as energy output parameters, as well as literature-based and observed energy input parameters, energy and greenhouse gas (GHG) balances were calculated for IFBB and two reference conversion processes, whole-crop digestion of untreated silage (WCD) and combustion of hay (CH). In the second experiment, biomass from one single semi-natural grassland sward (Arrhenaterion) was harvested at eight consecutive dates (27/04, 02/05, 09/05, 16/05, 24/05, 31/05, 11/06, 21/06) and ensiled. Each silage was subjected to six different treatments (no hydrothermal conditioning, and hydrothermal conditioning at 10°C, 30°C, 50°C, 70°C, 90°C). The energy balance was calculated for IFBB and WCD. Multiple regression models were developed to predict mass flows, concentrations of elements in the PC, concentrations of organic compounds in the PF and the energy conversion efficiency of the IFBB process from the temperature of hydrothermal conditioning as well as the NDF and DM concentrations of the silage. Results showed a relative reduction of ash and of all elements detrimental for combustion in the PC, compared to the untreated biomass, of 20-90%. The reduction was highest for K and Cl and lowest for N. HHV of the PC and the untreated biomass were in a comparable range (17.8-19.5 MJ kg-1 DM), but the AST of the PC was higher (1156-1254°C).
Methane yields of the PF were higher than those of WCD when the biomass was harvested late (end of May and later), were in a comparable range when the biomass was harvested early, and ranged from 332 to 458 LN kg-1 VS. Regarding energy and GHG balances, IFBB, with a net energy yield of 11.9-14.1 MWh ha-1, a conversion efficiency of 0.43-0.51, and GHG mitigation of 3.6-4.4 t CO2eq ha-1, performed better than WCD but worse than CH. WCD produces thermal and electric energy with low efficiency; CH produces only thermal energy, from a low-quality solid fuel, with high efficiency; IFBB produces thermal and electric energy, with a high-quality solid fuel, at medium efficiency. The regression models were able to predict the target parameters with high accuracy (R2 = 0.70-0.99). Increasing the temperature of hydrothermal conditioning increased mass flows, decreased element concentrations in the PC and had a differing effect on energy conversion efficiency. Increasing the NDF concentration of the silage had a differing effect on mass flows, decreased element concentrations in the PC and increased energy conversion efficiency. Increasing the DM concentration of the silage decreased mass flows, increased element concentrations in the PC and increased energy conversion efficiency. Based on the models, an optimised IFBB process would be obtained with a medium temperature of hydrothermal conditioning (50°C), a high NDF concentration in the silage and a medium DM concentration of the silage.
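The multiple regression models mentioned above predict process outputs from the conditioning temperature and the NDF and DM concentrations of the silage. The sketch below fits a generic model of that form by ordinary least squares; the data, coefficients and variable names are hypothetical and only illustrate the model structure, not the published equations.

```python
import numpy as np

# Hypothetical observations: conditioning temperature (°C), silage NDF (% DM),
# silage DM (%), and the measured energy conversion efficiency of the process.
X = np.array([
    [10, 48, 28],
    [30, 52, 30],
    [50, 55, 33],
    [70, 58, 36],
    [90, 60, 40],
    [50, 62, 25],
])
y = np.array([0.40, 0.43, 0.47, 0.46, 0.44, 0.49])

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print("intercept, b_temperature, b_NDF, b_DM =", np.round(coef, 4))
print(f"R^2 = {r2:.3f}")
```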

Abstract:

The delineation of Geomorphic Process Units (GPUs) aims to quantify past, current and future geomorphological processes and the sediment flux associated with them. Five GPUs have been identified for the Okstindan area of northern Norway and these were derived from the combination of Landsat satellite imagery (TM and ETM+) with stereo aerial photographs (used to construct a Digital Elevation Model) and ground survey. The Okstindan study area is sub-arctic and mountainous and is dominated by glacial and periglacial processes. The GPUs exclude the glacial system (some 37% of the study area) and hence they are focussed upon periglacial and colluvial processes. The identified GPUs are: 1. solifluction and rill erosion; 2. talus creep, slope wash and rill erosion; 3. accumulation of debris by rock and boulder fall; 4. rockwalls; and 5. stable ground with dissolved transport. The GPUs have been applied to a ‘test site’ within the study area in order to illustrate their potential for mapping the spatial distribution of geomorphological processes. The test site within the study area is a catchment which is representative of the range of geomorphological processes identified.
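Process-unit delineation of this kind typically overlays a DEM-derived slope raster with a land-cover classification and applies simple rules. The sketch below is a purely illustrative, hypothetical version of such a rule-based overlay; the thresholds, cover codes and unit assignments are invented and do not reproduce the Okstindan mapping.

```python
import numpy as np

# Hypothetical inputs on a common grid: slope in degrees (from a DEM)
# and a land-cover code (0 = vegetated, 1 = talus/debris, 2 = bare rock).
slope = np.array([[ 3, 12, 28],
                  [35, 55, 70],
                  [ 5, 40, 65]], dtype=float)
cover = np.array([[0, 0, 1],
                  [1, 2, 2],
                  [0, 1, 2]])

# Rule-based assignment of illustrative geomorphic process units (GPUs):
# 1 solifluction/rill erosion, 2 talus creep/slope wash, 3 debris accumulation,
# 4 rockwall, 5 stable ground with dissolved transport.
gpu = np.full(slope.shape, 5)                        # default: stable ground
gpu[(cover == 0) & (slope >= 10) & (slope < 30)] = 1
gpu[(cover == 1) & (slope >= 25) & (slope < 45)] = 2
gpu[(cover == 1) & (slope >= 45)] = 3
gpu[(cover == 2) & (slope >= 45)] = 4

print(gpu)
```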

Abstract:

Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecules to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
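As a minimal sketch of the density-based view described above, the code below integrates a simplified one-dimensional equation for root-tip (meristem) density, in which tips advance at the elongation rate and multiply through net branching. The equation, parameters and numerical scheme are illustrative assumptions, not the framework published in the paper, but they reproduce the travelling-wave behaviour mentioned in the abstract.

```python
import numpy as np

# Simplified 1-D model for root-tip (meristem) density n(z, t):
#   dn/dt + e * dn/dz = (b - d) * n
# e: elongation rate (cm/day), b: branching rate (1/day), d: mortality (1/day).
e, b, d = 1.0, 0.3, 0.05
L, nz, T = 40.0, 400, 20.0            # domain depth (cm), grid points, days
dz = L / nz
dt = 0.5 * dz / e                     # CFL-stable time step
z = np.linspace(0.0, L, nz)

n = np.exp(-0.5 * (z / 1.0) ** 2)     # initial tips concentrated near the surface

t = 0.0
while t < T:
    # First-order upwind advection plus exponential net branching.
    dn_dz = np.diff(n, prepend=n[0]) / dz
    n = n + dt * (-e * dn_dz + (b - d) * n)
    t += dt

# The density front advances at roughly the elongation rate, i.e. as a travelling wave.
front = z[np.argmax(n < 0.5 * n.max())]
print(f"after {T:.0f} days the half-maximum front sits near z = {front:.1f} cm")
```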

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution is not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have placed severe restrictions on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.

Abstract:

Purpose: Increasing costs of health care, fuelled by the demand for high-quality, cost-effective healthcare, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CP) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and, as a result, can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, by capturing knowledge from the syntactic, semantic and pragmatic levels to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes can best be tackled by analyzing stakeholders, which we treat as social agents, their goals and their patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CP generated from SAM together with norms enriches the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, of the ultimate power of decision making in exceptional circumstances.
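Norms of the kind produced by a norm analysis are often written in a "whenever <context> if <condition> then <agent> is <obliged/permitted/prohibited> to <action>" pattern. The snippet below is a purely illustrative encoding of one such hypothetical pathway norm; it is not the specification used in the paper, and the clinical content and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A behavioural norm in the common NAM-style pattern:
    whenever <context> if <condition> then <agent> is <deontic> to <action>."""
    context: str
    condition: str
    agent: str
    deontic: str      # "obliged", "permitted" or "prohibited"
    action: str

    def render(self) -> str:
        return (f"whenever {self.context} if {self.condition} "
                f"then {self.agent} is {self.deontic} to {self.action}")

# Hypothetical norm for one clinical-pathway step.
norm = Norm(
    context="a patient is admitted with suspected stroke",
    condition="the CT scan has not been performed within 60 minutes",
    agent="the attending physician",
    deontic="obliged",
    action="escalate the case to the stroke team",
)
print(norm.render())
```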

Abstract:

Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth's climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.

Abstract:

Using data from the EISCAT (European Incoherent Scatter) VHF and CUTLASS (Co-operative UK Twin- Located Auroral Sounding System) HF radars, we study the formation of ionospheric polar cap patches and their relationship to the magnetopause reconnection pulses identified in the companion paper by Lockwood et al. (2005). It is shown that the poleward-moving, high-concentration plasma patches observed in the ionosphere by EISCAT on 23 November 1999, as reported by Davies et al. (2002), were often associated with corresponding reconnection rate pulses. However, not all such pulses generated a patch and only within a limited MLT range (11:00–12:00 MLT) did a patch result from a reconnection pulse. Three proposed mechanisms for the production of patches, and of the concentration minima that separate them, are analysed and evaluated: (1) concentration enhancement within the patches by cusp/cleft precipitation; (2) plasma depletion in the minima between the patches by fast plasma flows; and (3) intermittent injection of photoionisation-enhanced plasma into the polar cap. We devise a test to distinguish between the effects of these mechanisms. Some of the events repeat too frequently to apply the test. Others have sufficiently long repeat periods and mechanism (3) is shown to be the only explanation of three of the longer-lived patches seen on this day. However, effect (2) also appears to contribute to some events. We conclude that plasma concentration gradients on the edges of the larger patches arise mainly from local time variations in the subauroral plasma, via the mechanism proposed by Lockwood et al. (2000).

Abstract:

The rapid development of biodiesel production technology has led to the generation of tremendous quantities of glycerol wastes as the main by-product of the process. Stoichiometrically, it has been calculated that for every 100 kg of biodiesel, 10 kg of glycerol are produced. Depending on the technology used by various biodiesel plants, glycerol wastes may contain numerous kinds of impurities such as methanol, salts, soaps, heavy metals and residual fatty acids. This often renders biodiesel-derived glycerol unprofitable for further purification. Therefore, the utilization of crude glycerol through biotechnological means represents a promising alternative for the effective management of this industrial waste. This review summarizes the effect of the various impurities and contaminants found in biodiesel-derived crude glycerol upon its conversion by microbial strains in biotechnological processes. Insights are given into the technologies currently applied in biodiesel production, with emphasis on the impurities that enter the composition of crude glycerol at each step of the production process. Moreover, the impact of the nature of the impurities on the performance of prokaryotic and eukaryotic microorganisms during crude glycerol bioconversions into a variety of high-added-value metabolic products is discussed extensively. Finally, approaches to the treatment of crude glycerol for the removal of inhibitory contaminants, as reported in the literature, are presented and comprehensively discussed.

Abstract:

A few years ago, it was reported that ozone is produced in human atherosclerotic arteries, on the basis of the identification of 3β-hydroxy-5-oxo-5,6-secocholestan-6-al and 3β-hydroxy-5β-hydroxy-B-norcholestane-6β-carboxaldehyde (ChAld) as their 2,4-dinitrophenylhydrazones. The formation of endogenous ozone was attributed to water oxidation catalyzed by antibodies, with the formation of dihydrogen trioxide as a key intermediate. We now report that ChAld is also generated by the reaction of cholesterol with singlet molecular oxygen [O₂(¹Δg)] produced by photodynamic action or by the thermodecomposition of 1,4-dimethylnaphthalene endoperoxide, a defined, pure chemical source of O₂(¹Δg). On the basis of mass spectrometry of ¹⁸O-labeled ChAld, NMR, light emission measurements, and derivatization studies, we propose that the mechanism of ChAld generation involves the formation of the well-known cholesterol 5α-hydroperoxide (5α-OOH), the major product of O₂(¹Δg) oxidation of cholesterol, and/or of a 1,2-dioxetane intermediate formed by O₂(¹Δg) attack at the Δ⁵ position. Hock cleavage of 5α-OOH (the major pathway) or decomposition of the unstable cholesterol dioxetane (a minor pathway, traces) gives a 5,6-secosterol intermediate, which undergoes intramolecular aldolization to yield ChAld. These results show clearly and unequivocally that ChAld is generated upon the reaction of cholesterol with O₂(¹Δg) and raise questions about the role of ozone in biological processes.