954 results for Cyclic AMP Response Element Modulator


Relevance: 30.00%

Publisher:

Abstract:

In chick embryo fibroblasts, the mRNA for the extracellular matrix protein tenascin-C is induced 2-fold by cyclic strain (10%, 0.3 Hz, 6 h). This response is attenuated by inhibiting Rho-associated kinase (ROCK). The RhoA/ROCK signaling pathway is primarily involved in actin dynamics. Here, we demonstrate its crucial importance in regulating tenascin-C expression. Cyclic strain stimulated RhoA activation and induced fibroblast contraction. Chemical activators of RhoA synergistically enhanced the effects of cyclic strain on cell contractility. Interestingly, tenascin-C mRNA levels perfectly matched the extent of RhoA/ROCK-mediated actin contraction. First, RhoA activation by thrombin, lysophosphatidic acid, or colchicine induced tenascin-C mRNA to a similar extent as strain. Second, RhoA-activating drugs in combination with cyclic strain caused a super-induction (4- to 5-fold) of tenascin-C mRNA, which was again suppressed by ROCK inhibition. Third, disruption of the actin cytoskeleton with latrunculin A abolished induction of tenascin-C mRNA by chemical RhoA activators in combination with cyclic strain. Lastly, we found that myosin II activity is required for tenascin-C induction by cyclic strain. We conclude that RhoA/ROCK-controlled actin contractility has a mechanosensory function in fibroblasts that correlates directly with tenascin-C gene expression. Prior RhoA/ROCK activation, whether by chemical or mechanical signals, might render fibroblasts more sensitive to external tensile stress, e.g., during wound healing.

Relevance: 30.00%

Publisher:

Abstract:

The tumor suppressor gene hypermethylated in cancer 1 (HIC1), located on human chromosome 17p13.3, is frequently silenced in cancer by epigenetic mechanisms. HIC1 belongs to the bric-à-brac/poxvirus and zinc finger family of transcription factors and acts by repressing target gene expression. It has been shown that enforced p53 expression leads to increased HIC1 mRNA, and recent data suggest that p53 and Hic1 cooperate in tumorigenesis. In order to elucidate the regulation of HIC1 expression, we have analysed the HIC1 promoter region for p53-dependent induction of gene expression. Using progressively truncated luciferase reporter gene constructs, we have identified a p53-responsive element (PRE) 500 bp upstream of the TATA-box-containing promoter P0 of HIC1, which is bound sequence-specifically by p53 in vitro, as assessed by electrophoretic mobility shift assays. We demonstrate that this HIC1 p53-responsive element (HIC1.PRE) is necessary and sufficient to mediate induction of transcription by p53. This result is supported by the observation that abolishing endogenous wild-type p53 function prevents HIC1 mRNA induction in response to UV-induced DNA damage. Other members of the p53 family, notably TAp73β and ΔNp63α, can also act through this HIC1.PRE to induce transcription of HIC1; finally, hypermethylation of the HIC1 promoter attenuates its inducibility by p53.

Relevance: 30.00%

Publisher:

Abstract:

One of the conclusions reached during the Congressionally mandated National Acid Precipitation Assessment Program (NAPAP) was that, compared to ozone and other stress factors, the direct effects of acidic deposition on forest health and productivity were likely to be relatively minor. However, the report also concluded "the possibility of long-term (several decades) adverse effects on some soils appears realistic" (Barnard et al. 1990). Possible mechanisms for these long-term effects include: (1) accelerated leaching of base cations from soils and foliage, (2) increased mobilization of aluminum (Al) and other metals such as manganese (Mn), (3) inhibition of soil biological processes, including organic matter decomposition, and (4) increased bioavailability of nitrogen (N).

Relevance: 30.00%

Publisher:

Abstract:

A recent article in this journal (Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124) argued that more than half of published research findings in the medical literature are false. In this commentary, we examine the structure of that argument and show that it has three basic components: 1) an assumption that the prior probability of most hypotheses explored in medical research is below 50%; 2) dichotomization of P-values at the 0.05 level and introduction of a "bias" factor (produced by significance-seeking), the combination of which severely weakens the evidence provided by every design; and 3) use of Bayes' theorem to show that, in the face of weak evidence, hypotheses with low prior probabilities cannot have posterior probabilities over 50%. Thus, the claim is based on a priori assumptions that most tested hypotheses are likely to be false, and the inferential model used then makes it impossible for evidence from any study to overcome this handicap. We focus largely on step 2), explaining how the combination of dichotomization and "bias" dilutes experimental evidence and showing how this dilution leads inevitably to the stated conclusion. We also demonstrate a fallacy in another important component of the argument: that papers in "hot" fields are more likely to produce false findings. We agree with the paper's conclusions and recommendations that many medical research findings are less definitive than readers suspect, that P-values are widely misinterpreted, that bias of various forms is widespread, that multiple approaches are needed to prevent the literature from being systematically biased, and that more data are needed on the prevalence of false claims. But calculating the unreliability of the medical research literature, in whole or in part, requires more empirical evidence and different inferential models than were used.
The claim that “most research findings are false for most research designs and for most fields” must be considered as yet unproven.
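The Bayesian arithmetic at the heart of step 3) is easy to reproduce. The sketch below applies Bayes' theorem to a "significant" finding, with power and α standing in for the strength of the evidence; the specific numbers are illustrative choices, not values taken from either paper:

```python
def posterior_true(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | significant result), by Bayes' theorem."""
    num = prior * power               # P(H1) * P(p < alpha | H1)
    den = num + (1 - prior) * alpha   # ... + P(H0) * P(p < alpha | H0)
    return num / den

# Conventional evidence: even a modest 25% prior ends up well above 50%.
print(round(posterior_true(prior=0.25), 3))                        # 0.842
# Diluted evidence: low power and an inflated effective alpha ("bias").
print(round(posterior_true(prior=0.10, power=0.2, alpha=0.3), 3))  # 0.069
```

Under conventional assumptions (80% power, α = 0.05) the evidence easily overcomes a low prior; only after the evidence is diluted does the posterior stay below 50%, which is precisely the dilution effect discussed above.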

Relevance: 30.00%

Publisher:

Abstract:

Lactococcus lactis IL1403, a lactic acid bacterium widely used for food fermentation, is often exposed to stress conditions. One such condition is exposure to copper, for example in cheese making in copper vats. Copper is an essential micronutrient in prokaryotes and eukaryotes but can be toxic in excess. Thus, copper homeostatic mechanisms, consisting chiefly of copper transporters and their regulators, have evolved in all organisms to control cytoplasmic copper levels. Using proteomics to identify novel proteins involved in the response of L. lactis IL1403 to copper, cells were exposed to 200 μM copper sulfate for 45 min, followed by resolution of the cytoplasmic fraction by two-dimensional gel electrophoresis. One protein strongly induced by copper was LctO, which was shown to be an NAD-independent lactate oxidase. It catalyzed the conversion of lactate to pyruvate in vivo and in vitro. Copper, cadmium, and silver induced LctO, as shown by real-time quantitative PCR. A copper-regulatory element was identified in the 5' region of the lctO gene and shown to interact with the CopR regulator, encoded by the unlinked copRZA operon. Induction of LctO by copper represents a novel copper stress response, and we suggest that it serves in the scavenging of molecular oxygen.

Relevance: 30.00%

Publisher:

Abstract:

Copper (Cu) and its alloys are used extensively in domestic and industrial applications. Cu is also an essential element in mammalian nutrition. Since both copper deficiency and copper excess produce adverse health effects, the dose-response curve is U-shaped, although its precise form has not yet been well characterized. Many animal and human studies have been conducted on copper, providing a rich database from which data suitable for modeling the dose-response relationship for copper may be extracted. Possible dose-response modeling strategies are considered in this review, including those based on the benchmark dose and categorical regression. The usefulness of biologically based dose-response modeling techniques in understanding copper toxicity is difficult to assess at this time, since the mechanisms underlying copper-induced toxicity have yet to be fully elucidated. A dose-response modeling strategy is proposed for copper toxicity associated with both deficiency and excess. This modeling strategy is applied to multiple studies of copper-induced toxicity, standardized with respect to severity of adverse health outcomes and selected on the basis of criteria reflecting the quality and relevance of individual studies. The use of a comprehensive database on copper-induced toxicity is essential for dose-response modeling, since there is insufficient information in any single study to adequately characterize copper dose-response relationships. The dose-response modeling strategy envisioned here is designed to determine whether the existing toxicity data for copper excess or deficiency may be effectively utilized in defining the limits of the homeostatic range in humans and other species. By considering alternative techniques for determining a point of departure and low-dose extrapolation (including categorical regression, the benchmark dose, and identification of observed no-effect levels), this strategy will identify which techniques are most suitable for this purpose.
This analysis also serves to identify areas in which additional data are needed to better define the characteristics of dose-response relationships for copper-induced toxicity in relation to excess or deficiency.
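As an illustration of the kind of dose-response modeling discussed here, the sketch below fits a U-shaped (quadratic-in-log-dose) curve to invented response data and solves for the dose range that keeps the fitted adverse response below a 10% benchmark; the data, curve form, and benchmark are hypothetical stand-ins, not values from the copper database:

```python
import numpy as np

# Hypothetical adverse-response rates at log10 copper doses (illustrative only):
# high response at both ends reflects deficiency (left) and excess (right).
log_dose = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
response = np.array([0.40, 0.15, 0.05, 0.03, 0.06, 0.20, 0.45])

# A quadratic fit on the log-dose scale captures the U shape.
a, b, c = np.polyfit(log_dose, response, 2)
optimum = -b / (2 * a)               # log dose of minimal fitted adverse response

# Homeostatic range: where the fitted response stays under a 10% benchmark.
bmr = 0.10
roots = np.roots([a, b, c - bmr])
lo, hi = sorted(roots.real)
print(f"optimum log10 dose ~ {optimum:.2f}, homeostatic range ~ [{lo:.2f}, {hi:.2f}]")
```

The review's own strategy (benchmark dose, categorical regression) is more formal; this only shows how a U-shaped fit yields a two-sided "safe" interval rather than a single threshold.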

Relevance: 30.00%

Publisher:

Abstract:

Spectrum sensing is currently one of the most challenging design problems in cognitive radio. A robust spectrum sensing technique is important in allowing implementation of practical dynamic spectrum access in noisy and interference-uncertain environments. In addition, it is desirable to minimize the sensing time while meeting the stringent cognitive radio application requirements. To cope with this challenge, cyclic spectrum sensing techniques have been proposed. However, such techniques require very high sampling rates in the wideband regime and thus are costly in hardware implementation and power consumption. In this thesis the concept of compressed sensing is applied to circumvent this problem by utilizing the sparsity of the two-dimensional cyclic spectrum. Compressive sampling is used to reduce the sampling rate, and a recovery method is developed for reconstructing the sparse cyclic spectrum from the compressed samples. The reconstruction solution exploits the sparsity structure in the two-dimensional cyclic spectrum domain, which differs from conventional compressed sensing techniques for vector-form sparse signals. The entire wideband cyclic spectrum is reconstructed from sub-Nyquist-rate samples for simultaneous detection of multiple signal sources. After the cyclic spectrum recovery, two methods are proposed to make spectral occupancy decisions from the recovered cyclic spectrum: a band-by-band multi-cycle detector which works for all modulation schemes, and a fast and simple thresholding method that works for Binary Phase Shift Keying (BPSK) signals only. In addition, a method for recovering the power spectrum of stationary signals is developed as a special case. Simulation results demonstrate that the proposed spectrum sensing algorithms can significantly reduce the sampling rate without sacrificing performance. The robustness of the algorithms to the noise uncertainty of the wireless channel is also shown.
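A minimal sketch of the compressive sampling idea, reduced to a one-dimensional sparse vector for clarity. The thesis works with the two-dimensional cyclic spectrum and a structure-aware recovery method; the Gaussian sensing matrix and the orthogonal matching pursuit (OMP) recovery below are generic stand-ins for that machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                  # ambient dim, measurements (m << n), sparsity

# Sparse "spectrum": k nonzero entries stand in for occupied cyclic frequencies.
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(0, 1, k)

# Random Gaussian sensing matrix: m compressed (sub-Nyquist-like) samples.
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily grow the support, then least-squares."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))   # best-correlated atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With 80 random measurements of a 5-sparse length-256 vector, recovery is essentially exact, which is the effect the thesis exploits (at far larger scale, and with the sparsity structure of the cyclic spectrum rather than a generic vector).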

Relevance: 30.00%

Publisher:

Abstract:

Standard procedures for forecasting flood risk (Bulletin 17B) assume annual maximum flood (AMF) series are stationary, meaning the distribution of flood flows is not significantly affected by climatic trends/cycles, or anthropogenic activities within the watershed. Historical flood events are therefore considered representative of future flood occurrences, and the risk associated with a given flood magnitude is modeled as constant over time. However, in light of increasing evidence to the contrary, this assumption should be reconsidered, especially as the existence of nonstationarity in AMF series can have significant impacts on planning and management of water resources and relevant infrastructure. Research presented in this thesis quantifies the degree of nonstationarity evident in AMF series for unimpaired watersheds throughout the contiguous U.S., identifies meteorological, climatic, and anthropogenic causes of this nonstationarity, and proposes an extension of the Bulletin 17B methodology which yields forecasts of flood risk that reflect climatic influences on flood magnitude. To appropriately forecast flood risk, it is necessary to consider the driving causes of nonstationarity in AMF series. Herein, large-scale climate patterns—including El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO)—are identified as influencing factors on flood magnitude at numerous stations across the U.S. Strong relationships between flood magnitude and associated precipitation series were also observed for the majority of sites analyzed in the Upper Midwest and Northeastern regions of the U.S. Although relationships between flood magnitude and associated temperature series are not apparent, results do indicate that temperature is highly correlated with the timing of flood peaks. 
Despite consideration of watersheds classified as unimpaired, analyses also suggest that identified change-points in AMF series are due to dam construction and other types of regulation and diversion. Although not explored herein, trends in AMF series are also likely to be partially explained by changes in land use and land cover over time. Results obtained herein suggest that improved forecasts of flood risk may be obtained using a simple modification of the Bulletin 17B framework, wherein the mean and standard deviation of the log-transformed flows are modeled as functions of climate indices associated with oceanic-atmospheric patterns (e.g., AMO, ENSO, NAO, and PDO) with lead times between 3 and 9 months. Herein, one-year-ahead forecasts of the mean and standard deviation, and subsequently flood risk, are obtained by applying site-specific multivariate regression models, which reflect the phase and intensity of a given climate pattern, as well as possible impacts of coupling of the climate cycles. These forecasts of flood risk are compared with forecasts derived using the existing Bulletin 17B model; large differences in the one-year-ahead forecasts are observed in some locations. The increased knowledge of the inherent structure of AMF series and an improved understanding of the physical and/or climatic causes of nonstationarity gained from this research should serve as insight for the formulation of a physical-causal statistical model, incorporating both climatic variations and human impacts, for flood risk over the longer planning horizons (e.g., 10-, 50-, and 100-year) necessary for water resources design, planning, and management.
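The proposed modification can be sketched as follows, using a lognormal simplification of Bulletin 17B's log-Pearson Type III distribution and entirely synthetic data in place of real AMF records and climate indices (the regression coefficients and index values are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic 80-year record: the log-mean of annual maximum floods shifts with
# two standardized climate indices (stand-ins for lagged ENSO and AMO values).
years = 80
enso = rng.normal(0, 1, years)
amo = rng.normal(0, 1, years)
log_q = 6.0 + 0.30 * enso + 0.15 * amo + rng.normal(0, 0.4, years)

# Multivariate regression for the conditional mean of the log-transformed flows.
X = np.column_stack([np.ones(years), enso, amo])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
sigma = np.std(log_q - X @ beta, ddof=3)       # residual standard deviation

# One-year-ahead 1%-chance (100-year) flood, given forecast index values.
z99 = stats.norm.ppf(0.99)
forecasts = {}
for e, a in [(-1.5, 0.0), (1.5, 0.0)]:         # cool-phase vs warm-phase ENSO
    mu = beta @ np.array([1.0, e, a])
    forecasts[e] = np.exp(mu + z99 * sigma)
    print(f"ENSO = {e:+.1f}: Q100 ~ {forecasts[e]:.0f}")
```

The conditional quantile moves with the forecast index, which is the mechanism behind the "large differences" from the stationary Bulletin 17B estimate noted above.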

Relevance: 30.00%

Publisher:

Abstract:

The study of advanced materials aimed at improving human life has been performed since time immemorial. Such studies have created everlasting and greatly revered monuments and have helped revolutionize transportation by ushering in the age of lighter-than-air flying machines. Hence a study of the mechanical behavior of advanced materials can pave the way for their use for mankind's benefit. In this school of thought, the aim of this dissertation is to perform two broad investigations. First, an efficient modeling approach is established to predict the elastic response of cellular materials with distributions of cell geometries. Cellular materials find important applications in structural engineering. The approach does not require the complex and time-consuming computational techniques usually associated with modeling such materials. Unlike most current analytical techniques, the modeling approach directly accounts for the cellular material microstructure. The approach combines micropolar elasticity theory and elastic mixture theory to predict the elastic response of cellular materials. The modeling approach is applied to the two-dimensional balsa wood material. Predicted properties are in good agreement with experimentally determined properties, which emphasizes the model's potential to predict the elastic response of other cellular solids, such as open-cell and closed-cell foams. The second topic concerns intraneural ganglion cysts, a set of medical conditions that result in denervation of the muscles innervated by the cystic nerve, leading to pain and loss of function. Current treatment approaches only temporarily alleviate pain and denervation but do not prevent cyst recurrence. Hence, a mechanistic understanding of the pathogenesis of intraneural ganglion cysts can help clinicians understand them better and therefore devise more effective treatment options.
In this study, an analysis methodology using finite element analysis is established to investigate the pathogenesis of intraneural ganglion cysts. Using this methodology, the propagation of these cysts is analyzed in their most common site of occurrence in the human body, i.e., the common peroneal nerve. Results obtained using finite element analysis show good correlation with clinical imaging patterns, thereby validating the promise of the method for studying cyst pathogenesis.

Relevance: 30.00%

Publisher:

Abstract:

The goal of this research is to provide a framework for vibro-acoustical analysis and design of a multiple-layer constrained damping structure. The existing research on damping and the viscoelastic damping mechanism is limited to four mainstream approaches: modeling techniques for damping treatments/materials; control through the electro-mechanical effect using a piezoelectric layer; optimization by adjusting the parameters of the structure to meet design requirements; and identification of the damping material's properties through the response of the structure. This research proposes a systematic design methodology for the multiple-layer constrained damping beam giving consideration to vibro-acoustics. A modeling technique to study the vibro-acoustics of multiple-layered viscoelastic laminated beams using the Biot damping model is presented using a hybrid numerical model. The boundary element method (BEM) is used to model the acoustical cavity, whereas the finite element method (FEM) is the basis for the vibration analysis of the multiple-layered beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature, and different damping materials for individual layers. A curve-fitting procedure used to obtain the Biot constants for each damping material at each temperature is explained. The results from structural vibration analysis for selected beams agree with published closed-form results, and the radiated noise for a sample beam structure predicted using commercial BEM software is compared with the acoustical results for the same beam obtained using the Biot damping model.
The extension of the Biot damping model is demonstrated in the study of the MDOF (multiple degrees of freedom) dynamics equations of a discrete system in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency. The application of multiple-layer treatment increases the damping characteristics of the structure significantly and thus helps attenuate vibration and noise over a broad range of frequencies and temperatures. The main contributions of this dissertation include the following three major tasks: 1) studying the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the finite element method (FEM) model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; and 3) extending the vibration problem to the boundary element method (BEM)-based acoustical problem and comparing the results with commercial simulation software.
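A minimal sketch of how a frequency-dependent complex stiffness enters such a frequency-response calculation, reduced here to a single degree of freedom. The stiffness and loss-factor frequency laws below are invented stand-ins for a fitted Biot-type damping model, not the dissertation's parameters:

```python
import numpy as np

m = 1.0                                   # modal mass, kg (invented)
freqs = np.linspace(1.0, 200.0, 2000)     # Hz
omega = 2.0 * np.pi * freqs

def complex_stiffness(omega, k0=4.0e4, eta0=0.02):
    """k*(w) = k(w) * (1 + i*eta(w)): stiffness and loss factor both grow
    mildly with frequency, mimicking a fitted viscoelastic damping law."""
    f = omega / (2.0 * np.pi)
    k = k0 * (1.0 + 0.1 * np.log1p(f))
    eta = eta0 * (1.0 + 0.5 * np.log1p(f))
    return k * (1.0 + 1j * eta)

peaks = {}
for label, scale in [("1 layer", 1.0), ("3 layers", 3.0)]:
    # Tripling the loss factor is a crude proxy for added constrained layers.
    kc = complex_stiffness(omega, eta0=0.02 * scale)
    H = 1.0 / (kc - m * omega**2)         # receptance FRF of the damped mode
    peaks[label] = np.max(np.abs(H))
    print(f"{label}: peak |H| = {peaks[label]:.2e}")
```

The higher effective loss factor lowers the resonance peak, consistent with the attenuation effect of multiple-layer treatment described above.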

Relevance: 30.00%

Publisher:

Abstract:

A major deficiency in disaster management plans is the assumption that pre-disaster civil society does not have the capacity to respond effectively during crises. Following from this assumption, a dominant emergency management strategy is to replace weak civil-society organizations with specialized disaster organizations that are often either military or paramilitary and seek to centralize decision-making. Many criticisms have been made of this approach, but few specifically address disasters in the developing world. Disasters in the developing world present unique problems not seen in the developed world because they often occur in the context of compromised governments and marginalized populations. In this context it is often community members themselves who possess the greatest capacity to respond to disasters. This paper focuses on the capacity of community groups to respond to disaster in a small town in rural Guatemala. Key informant interviews and ethnographic observations are used to reconstruct the community response to the disaster instigated by Hurricane Stan (2005) in the municipality of Tectitán in the Huehuetenango department. The interviews were analyzed using techniques adapted from grounded theory to construct a narrative of the events and identify themes in the community's disaster behavior. These themes are used to critique the emergency management plans advocated by the Guatemalan National Coordination for the Reduction of Disasters (CONRED). This paper argues that CONRED uncritically adopts emergency management strategies that do not account for local realities in communities throughout Guatemala. The response in Tectitán was characterized by the formation of new organizations whose actions and leadership structure were derived from "normal" or routine life. It was found that pre-existing social networks were resilient and easily re-oriented to meet the novel needs of a crisis.
New or emergent groups that formed during the disaster utilized social capital accrued through routine collective behavior and employed organizational strategies derived from "normal" community relations. Based on the effectiveness of this response, CONRED could improve its emergency planning at the local level by utilizing pre-existing community organizations rather than insisting that new disaster-specific organizations be formed.

Relevance: 30.00%

Publisher:

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third-largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted to the development of accurate models over the last fifty years, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often have a large error in comparison to experimental data. Thus, even today, process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time-consuming. In order to reduce the time and experimental work required for optimizing the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago. However, that work had only limited success because of the capability of the computers and mathematical algorithms available at that time. The dramatic improvement in computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC.
In order to verify the numerical predictions from the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This experimental study included Maddock screw-freezing experiments, Screw Simulator experiments, and material characterization experiments. Maddock screw-freezing experiments were performed in order to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations. These melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely, the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique in order to obtain the flow domain. The simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
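The geometry optimization described above can be sketched as a constrained search over the two parameters. Here, crude invented surrogate formulas stand in for the full 3-D two-phase FEM simulation (the real code evaluates the simulation at each candidate geometry); the coefficients and bounds are hypothetical:

```python
import numpy as np

# Hypothetical surrogates for the FEM outputs (coefficients invented).
def output_rate(lead, depth):          # kg/h: rises with channel volume
    return 40.0 * lead * depth

def exit_temperature(lead, depth):     # deg C: bigger channels run hotter here
    return 180.0 + 55.0 * lead * depth

T_MAX = 230.0                          # exit-temperature limit, deg C (invented)

# Brute-force search over screw lead (pitch) and metering-channel depth,
# both in normalized units: maximize output subject to the temperature limit.
best = None
for lead in np.linspace(0.5, 2.0, 31):
    for depth in np.linspace(0.2, 1.0, 33):
        if exit_temperature(lead, depth) <= T_MAX:
            rate = output_rate(lead, depth)
            if best is None or rate > best[0]:
                best = (rate, lead, depth)

rate, lead, depth = best
print(f"best feasible rate {rate:.1f} at lead={lead:.2f}, depth={depth:.2f}")
```

With these monotone surrogates the optimum sits on the temperature constraint, which mirrors the trade-off the optimization code navigates with the real simulation in the loop.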

Relevance: 30.00%

Publisher:

Abstract:

Red pine (Pinus resinosa Ait.) plantations have been established in Michigan with expectations of mixed final product goals: pulpwood, boltwood, and possibly sawlogs. The effects of alternative treatments on tree and stand attributes were examined in three trials: the Atlantic Mine trial, thinned in spring 2006 with three alternatives: (1) every fifth row removed plus crown thinning, (2) every third row removed plus crown thinning, and (3) every third row removed plus thinning from below; the Crane Lake trial, thinned in fall 2004 with two alternatives: (1) every third row removed and (2) every third row removed plus thinning from above; and the Middle Branch East trial, thinned in fall 2004 with two alternatives: (1) every third row removed plus one in three remaining trees and (2) every third row removed plus one in five remaining trees. All trials included control plots where no thinning was applied. The trials were established in the field as randomized complete block experiments, in which individual trees were measured in 3-4 fixed-area plots located within each treatment unit. Growth responses of diameter at breast height, height, live crown length, stand basal area, and stand volume were examined along with their increments. The Tukey multiple comparison test was used to detect significant differences between treatments in their effect on tree growth response. The results showed that diameter increment increased with increasing thinning intensity and was significantly larger in thinned plots than in unthinned plots. Treatments did not substantially affect average tree height increment. Stand basal area increment was significantly larger in the control plots only in the year after the harvest. Volume increment was significantly larger in the controls, but did not differ considerably among the remaining treatments. However, the ratio of volume increment to standing volume was significantly smaller in unthinned plots than in thinned plots.
Since the thinning treatments in all trials hardly ever differed significantly in their effect on stand growth response, mainly due to the relatively short evaluation period, heavier thinnings should be favored due to higher volume increment rates and the shorter time needed to reach desirable diameters. Nevertheless, an economic evaluation based on the obtained results will be conducted in the future in order to make final decisions about the most profitable treatment.
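The Tukey multiple comparison used here can be reproduced with SciPy (≥ 1.8 for `tukey_hsd`); the diameter-increment samples below are invented to mimic the reported pattern (heavier thinning yields larger diameter growth), not the trial data:

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(7)

# Hypothetical 5-year diameter increments (cm), 20 trees per treatment.
control = rng.normal(1.8, 0.4, 20)      # unthinned
row_below = rng.normal(2.3, 0.4, 20)    # row removal + thinning from below
row_crown = rng.normal(2.7, 0.4, 20)    # row removal + crown thinning

# Tukey's HSD tests all pairwise treatment differences at once.
res = tukey_hsd(control, row_below, row_crown)
print(f"control vs crown thinning: p = {res.pvalue[0, 2]:.2e}")
```

The result object holds the full pairwise p-value matrix, so each treatment contrast can be inspected exactly as in the stand-level comparisons reported above.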

Relevance: 30.00%

Publisher:

Abstract:

Chapter 1 introduces the tools and mechanics necessary for this report. Basic definitions and topics of graph theory which pertain to the report and to the discussion of automorphic decompositions are covered in brief detail. An automorphic decomposition D of a graph H by a graph G is a G-decomposition of H such that the intersection graph of D is isomorphic to H. H is called the automorphic host, and G is the automorphic divisor. We seek to find classes of graphs that are automorphic divisors, specifically ones generated cyclically. Chapter 2 discusses the previous work, done mainly by Beeler. It also discusses and gives more detailed examples of automorphic decompositions of graphs. Chapter 2 also discusses labelings and their direct relation to cyclic automorphic decompositions. We show that basic classes of graphs, such as cycles, that are known to have certain labelings are also automorphic divisors. In Chapter 3, we are concerned with 2-regular graphs, in particular rCm, the disjoint union of r copies of the m-cycle. We seek to show that rCm has a ρ-labeling, and thus is an automorphic divisor, for all r and m. We discuss methods, including Skolem-type difference sets, used to create cycle systems, and their correlation to automorphic decompositions. In the Appendix, we give classes of graphs known to be graceful and our Java code to generate ρ-labelings on rCm.
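Since the Appendix deals with graceful labelings (a stronger condition than the ρ-labelings sought in Chapter 3: every graceful labeling is also a ρ-labeling), a small checker is a useful companion. The sketch below verifies a graceful labeling of C4 and confirms exhaustively that C5 has none, consistent with the classical result that Cm is graceful iff m ≡ 0 or 3 (mod 4); it is an illustration, not the Java code referenced above:

```python
from itertools import permutations

def is_graceful(vertices, edges, labels):
    """Graceful labeling check: distinct vertex labels drawn from {0, ..., m}
    (m = number of edges) whose induced edge labels |f(u) - f(v)|
    are exactly 1, ..., m."""
    m = len(edges)
    if len(set(labels)) != len(labels) or not all(0 <= l <= m for l in labels):
        return False
    f = dict(zip(vertices, labels))
    return sorted(abs(f[u] - f[v]) for u, v in edges) == list(range(1, m + 1))

verts4 = list("abcd")
edges4 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(is_graceful(verts4, edges4, [0, 4, 1, 2]))   # True: C4 is graceful

verts5 = list("abcde")
edges5 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]
# Exhaustive search over all injective labelings from {0, ..., 5}.
print(any(is_graceful(verts5, edges5, p)
          for p in permutations(range(6), 5)))     # False: C5 is not graceful
```

The same exhaustive pattern extends to small instances of rCm for spot-checking candidate labelings before proving them in general.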