824 results for Hot-spots
Abstract:
The hypothesis that chromosomal fragile sites may be “weak links” that result in hot spots for cancer-specific chromosome rearrangements was supported by the discovery that numerous cancer cell homozygous deletions and a familial translocation map within the FHIT gene, which encompasses the common fragile site, FRA3B. Sequence analysis of 276 kb of the FRA3B/FHIT locus and 22 associated cancer cell deletion endpoints shows that this locus is a frequent target of homologous recombination between long interspersed nuclear element sequences resulting in FHIT gene internal deletions, probably as a result of carcinogen-induced damage at FRA3B fragile sites.
Abstract:
ATP-gated P2X2 receptors are widely expressed in neurons, but the cellular effects of receptor activation are unclear. We engineered functional green fluorescent protein (GFP)-tagged P2X2 receptors and expressed them in embryonic hippocampal neurons, and report an approach to determining functional and total receptor pool sizes in living cells. ATP application to dendrites caused receptor redistribution and the formation of varicose hot spots of higher P2X2-GFP receptor density. Redistribution in dendrites was accompanied by an activation-dependent enhancement of the ATP-evoked current. Substate-specific mutant T18A P2X2-GFP receptors showed no redistribution or activation-dependent enhancement of the ATP-evoked current. Thus fluorescent P2X2-GFP receptors function normally, can be quantified, and reveal the dynamics of P2X2 receptor distribution on the seconds time scale.
Abstract:
High affinity antibodies are generated in mice and humans by means of somatic hypermutation (SHM) of variable (V) regions of Ig genes. Mutations with rates of 10^-5 to 10^-3 per base pair per generation, about 10^6-fold above normal, are targeted primarily at V-region hot spots by unknown mechanisms. We have measured mRNA expression of DNA polymerases ι, η, and ζ using cultured Burkitt's lymphoma (BL2) cells. These cells exhibit 5–10-fold increases in heavy-chain V-region mutations targeted predominantly to RGYW (R = A or G, Y = C or T, W = T or A) hot spots, but only if costimulated with T cells and IgM crosslinking, the presumed in vivo requirements for SHM. An ∼4-fold increase in pol ι mRNA occurs within 12 h of coculture with T cells and surface IgM crosslinking. Induction of pols η and ζ occurs with T cells, IgM crosslinking, or both stimuli. The fidelity of pol ι was measured at RGYW hot-spot and non-hot-spot sequences situated at nicks, gaps, and double-strand breaks. Pol ι formed T⋅G mispairs at a frequency of 10^-2, consistent with SHM-generated C to T transitions, with a 3-fold higher error rate in hot-spot than in non-hot-spot sequences for the single-nucleotide overhang. The T cell and IgM crosslinking-dependent induction of pol ι at 12 h may indicate that an SHM “triggering” event has occurred. However, pols ι, η, and ζ are present under all conditions, suggesting that their presence alone is not sufficient to generate mutations, because both T cell and IgM stimuli are required for SHM induction.
Abstract:
The existence of a code relating the set of possible sequences at a given position in a protein backbone to the local structure at that location is investigated. It is shown that only 73% of 4-C alpha structure fragments in a sample of 114 protein structures exhibit a preference for a particular set of sequences. The remaining structures can accommodate essentially any sequence. The structures that encode specific sequence distributions include the classical "secondary" structures, with the notable exception of planar (beta) bends. It is suggested that this has implications as to the mechanism of folding in proteins with extensive sheet/barrel structure. The possible role of structures that do not encode specific sequences as mutation hot spots is noted.
Abstract:
Here, we present bulk organic geochemical data from a spatial grid of surface samples from the western Barents Sea region. The results show that the distribution of organic carbon in surface sediments is predominantly controlled by input from land-derived terrigenous and in-situ produced marine organic matter. Based on various nitrogenous fractions and stable isotopes of bulk organic carbon, we show that the spatial distribution of terrigenous organic carbon is independent of water depth, organic carbon mineralization and variable sedimentation rates. Instead, the pattern is predominantly controlled by sea ice-induced lateral transport and subsequent release in the Marginal Ice Zone (MIZ), as well as by the distance to shore. Consistent with the observation of high vertical flux of particulate organic material in the MIZ, amounts of marine organic carbon are significantly enhanced in sediments below the winter ice margin. This is in accordance with modern observations suggesting that Arctic shelves with seasonal ice zones can be hot spots of vertical carbon export and thus a potential CO2 sink.
Abstract:
The Lapeyre-Triflo FURTIVA valve aims to combine the favorable hemodynamics of bioprosthetic heart valves with the durability of mechanical heart valves (MHVs). The pivoting region of MHVs is of special hemodynamic interest, as it may be a region of high shear stresses combined with areas of flow stagnation; here, platelets can be activated and may form a thrombus, which in the most severe case can compromise leaflet mobility. In this study we set up an experiment to replicate the pulsatile flow in the aortic root and to study the flow in the pivoting region under physiological hemodynamic conditions (CO = 4.5 L/min and 3.0 L/min, f = 60 BPM). It was found that the flow velocity in the pivoting region could reach values close to that of the bulk flow during systole. At the onset of diastole the three valve leaflets closed in a very synchronous manner, with an average closing time of 55 ms, which is much slower than what has been measured for traditional bileaflet MHVs. Hot spots of elevated viscous shear stress (VSS) were found at the flanges of the housing and the tips of the leaflet ears. Systolic VSS was maximal during mid-systole and reached levels of up to 40 Pa.
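For context, viscous shear stress in a Newtonian fluid is the product of dynamic viscosity and local shear rate, so the reported 40 Pa peak implies a shear rate on the order of 10^4 s^-1. The short Python sketch below works this out; the blood-analog viscosity is an assumed illustrative value, not a parameter reported by the study.

# Illustrative only: shear rate implied by a given viscous shear stress (VSS),
# assuming a Newtonian blood-analog fluid. The viscosity is an assumed value,
# not one taken from the study.
mu = 3.5e-3        # dynamic viscosity, Pa*s (assumed blood-analog value)
vss_peak = 40.0    # reported peak viscous shear stress, Pa

shear_rate = vss_peak / mu    # VSS = mu * shear_rate for a Newtonian fluid
print(f"implied shear rate ~ {shear_rate:.0f} 1/s")   # roughly 1.1e4 1/s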
Abstract:
From October 2014 to March 2015, I provided excavation oversight services at a property with substantial environmental concerns. The property in question is located near downtown Seattle and was formerly occupied by Washington's first coal gasification plant. The plant operated from 1888 to 1908 and produced coal gas for municipal use. A coal tar-like substance with a characteristically high benzene concentration was a byproduct of the coal gasification process and, as shown in previous investigations on the property, heavily contaminated soils at or below the surface grade of the plant. Once the plant ceased operation in 1908, the property was left vacant until 1955, when the site was filled in and a service station was built on the property. The main goal of the excavation was not to achieve cleanup of the property, but to properly remove the contaminated soil encountered during the redevelopment excavation. Areas of concern were identified prior to the commencement of the excavation, and an estimate of the extent of contamination on the property was developed. “Hot spots” of contaminated soil associated with the fill placed after 1955 were identified as areas of concern. However, the primary contaminant plume below the property was likely sourced from the coal gasification plant, which operated at an approximate elevation of 20 feet. We planned to constrain the extent of the soil contamination below the property as the redevelopment excavation progressed. As the redevelopment excavation was advanced down to an elevation of approximately 20 feet, soil samples were collected to bound the extent of contamination in the upper portion of the site. The hot spots, known pockets of carcinogenic polycyclic aromatic hydrocarbons (cPAHs) located above 20 feet elevation, were excavated as part of the redevelopment excavation. Once a hot spot was excavated, soil samples were collected from the north, south, east and west sidewalls and the bottom of the hot spot excavation to check for remaining cPAHs. Additionally, four underground storage tanks (USTs) associated with the service station were discovered and subsequently removed. Soil samples were also collected from the resulting UST excavation sidewalls to check for remaining petroleum hydrocarbons. Once the excavation reached its final grade, at an elevation of 16 to 20 feet, bottom-of-excavation samples were collected on a 35-foot by 35-foot grid to test for concentrations of contaminants remaining onsite. Once the redevelopment excavation was complete, soils observed from borings drilled for structural elements, geotechnical wells, or environmental wells were checked for any evidence of contamination using field screening techniques. Evidence of contamination was used to identify areas below the final excavation grade which had been impacted by the operation of the coal gasification plant. Samples collected from the excavation extents of hot spots and USTs show that it was unlikely that any contamination traveled from the post-1955 grade down to the pre-1955 grade. Additionally, the lack of benzene in the bottom-of-excavation samples suggests that a release from the coal gasification plant occurred below the redevelopment excavation's final elevations of 16 to 20 feet. Qualitative data collected from borings for shoring elements and wells indicated that the spatial extent of the subsurface contaminant plume was different than initially estimated.
Observations of spoils show that the soil contamination extends farther to the southwest, and not as far to the east and north, as originally estimated. Redefining the extent of the soil contamination beneath the property will allow further subsurface investigations to focus on collecting quantitative data in areas that still represent data gaps on the property, while passing over areas that have shown few signs of contamination. This information will help with the formation of a remediation plan should the need to clean up the site arise in the future.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A set of techniques referred to as circular statistics has been developed for the analysis of directional and orientational data. The unit of measure for such data is angular (usually either degrees or radians), and the statistical distributions underlying the techniques are characterised by their cyclic nature: for example, an angle of 359.9 degrees is considered close to an angle of 0 degrees. In this paper, we assert that such approaches can be easily adapted to analyse time-of-day and time-of-week data, and in particular daily cycles in the numbers of incidents reported to the police. We begin the paper by describing circular statistics. We then discuss how these may be modified, and demonstrate the approach with some examples for reported incidents in the Cardiff area of Wales.
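The adaptation described above can be illustrated with a small Python sketch (ours, not taken from the paper): each time of day is mapped onto the unit circle so that incidents just before and just after midnight are treated as close together, and the circular mean and mean resultant length are then computed. The incident times are hypothetical.

import math

# Hypothetical incident times of day, in hours (note the cluster around midnight).
incident_hours = [23.5, 0.25, 1.0, 22.75, 23.0]

# Map 24 hours onto 0..2*pi so that 23:59 and 00:01 end up as nearby angles.
angles = [2 * math.pi * h / 24.0 for h in incident_hours]
sin_sum = sum(math.sin(a) for a in angles)
cos_sum = sum(math.cos(a) for a in angles)

mean_angle = math.atan2(sin_sum, cos_sum) % (2 * math.pi)      # circular mean direction
resultant_length = math.hypot(sin_sum, cos_sum) / len(angles)  # concentration measure, 0..1

mean_hour = mean_angle * 24.0 / (2 * math.pi)
print(f"circular mean time of day ~ {mean_hour:.2f} h, mean resultant length = {resultant_length:.2f}")

An ordinary arithmetic mean of the same times would land in the early afternoon; the circular mean correctly places the peak close to midnight, which is exactly the property needed for daily incident cycles.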
Abstract:
Benchmarking exercises have become increasingly popular within the sphere of regional policy making. However, most exercises are restricted to comparing regions within a particular continental bloc or nation. This article introduces the World Knowledge Competitiveness Index (WKCI), which is one of the very few benchmarking exercises established to compare regions across continents. The article discusses the formulation of the WKCI and analyzes the results of the most recent editions. The results suggest that there are significant variations in the knowledge-based regional economic development models at work across the globe. Further analysis also indicates that Silicon Valley, as the highest ranked WKCI region, holds a unique economic position among the globe’s leading regions. However, significant changes in the sources of regional competitiveness are evolving as a result of the emergence of new regional hot spots in Asia. It is concluded that benchmarking is imperative to the learning process of regional policy making.
Abstract:
The objective of the research described in this report was to observe the first ever in-situ sonochemical reaction in the NMR spectrometer using ultrasound in the megahertz region. Several reactions were investigated as potential systems for a sonochemical reaction followed by NMR spectroscopy. The primary problem to resolve when applying ultrasound to a chemical reaction is heating: ultrasound causes the liquid to move and produces 'hot spots', resulting in an increase in sample temperature. The problem was confronted by producing a device that would counteract this effect and so remove the need to account for heating. However, the design of the device limited the length of time during which it would function, and longer reaction times were required to enable observations to be carried out in the NMR spectrometer. The first and most obvious reactions attempted were those of the well-known ultrasonic dosimeter. Such a reaction would, theoretically, enable the author to simultaneously observe a reaction and determine the exact power entering the system for direct comparison of results. Unfortunately, in order to monitor the reactions in the NMR spectrometer the reactant concentrations had to be significantly increased, which resulted in a notable increase in reaction time, making the experiment too lengthy to follow in the time allocated. The Diels-Alder reaction is probably one of the most highly investigated reaction systems in the field of chemistry, and it was to this that the author turned her attention. Previous authors have carried out ultrasonic investigations, with considerable success, of the reaction of anthracene with maleic anhydride, and it was this reaction that was attempted next. The first ever sonochemically enhanced reaction using a frequency of ultrasound in the megahertz (MHz) region was successfully carried out as bench experiments. Due to the complexity of the component reactants the product would precipitate from the solution, and because the reaction could only be monitored by product formation, it was not possible to observe this reaction in the NMR spectrometer. The solvolysis of 2-chloro-2-methylpropane was examined in various solvent systems, of which the most suitable was determined to be aqueous 2-methylpropan-2-ol. The reaction was successfully enhanced by the application of ultrasound and monitored in-situ in the NMR spectrometer. An increase in product formation for the ultrasonic reaction over that of a traditional thermal reaction was observed, with a 1.4- to 2.9-fold improvement noted depending on the reaction conditions investigated. An investigation into the effect of sonication upon a large biological molecule, in this case aqueous lysozyme, was also carried out. An easily observed effect upon the sample was noted, but no explanation for the observed effects could be established.
Abstract:
Internally heated fluids are found across the nuclear fuel cycle. In certain situations the motion of the fluid is driven by decay heat (e.g. corium melt pools in severe accidents, the shutdown of liquid metal reactors, molten salt, and the passive control of light water reactors) as well as by normal operation (e.g. intermediate waste storage and Generation IV reactor designs). In the long term this can affect reactor vessel integrity or lead to localized hot spots and accumulation of solid wastes that may prompt local increases in activity. Two approaches to the modeling of internally heated convection are presented here: numerical analysis using codes developed in-house, and simulations using widely available computational fluid dynamics solvers. Open and closed fluid layers of various aspect ratios at around the transition between conduction and convection are considered. We determine the optimum domain aspect ratios (1:7:7 up to 1:24:24 for open systems and 5:5:1, 1:10:10 and 1:20:20 for closed systems), mesh resolutions and turbulence models required to accurately and efficiently capture the convection structures that evolve when the conductive state of the fluid layer is perturbed. Note that the open and closed fluid layers we study here are bounded by a conducting surface over an insulating surface. Conclusions are drawn on the influence of the periodic boundary conditions on the flow patterns observed. We have also examined the stability of the nonlinear solutions that we found, with the aim of identifying the bifurcation sequence of these solutions en route to turbulence.
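As a rough illustration of where a configuration sits relative to the conduction-convection transition, the sketch below evaluates one commonly used internal-heating Rayleigh number, Ra = g*beta*H*d^5/(nu*kappa*k), where H is the volumetric heat generation rate. All property values and the critical threshold are assumptions for illustration only (they are not parameters from this work), and both the exact definition and the critical value depend on the chosen convention and boundary conditions.

# Minimal sketch with assumed, water-like property values (not from this study):
# internal-heating Rayleigh number Ra = g * beta * H * d**5 / (nu * kappa * k).
# Convection is expected once Ra exceeds a critical value that depends on the
# boundary conditions; the threshold below is only an order-of-magnitude guess.
g     = 9.81      # gravitational acceleration, m/s^2
beta  = 2.1e-4    # thermal expansion coefficient, 1/K (assumed)
H     = 50.0      # volumetric heating rate, W/m^3 (assumed)
d     = 0.1       # layer depth, m (assumed)
nu    = 1.0e-6    # kinematic viscosity, m^2/s (assumed)
kappa = 1.4e-7    # thermal diffusivity, m^2/s (assumed)
k     = 0.6       # thermal conductivity, W/(m K) (assumed)

Ra = g * beta * H * d**5 / (nu * kappa * k)
Ra_crit = 1.0e3   # illustrative threshold; the true value depends on the boundary conditions
print(f"Ra = {Ra:.3g} -> {'convective' if Ra > Ra_crit else 'conductive'} regime")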
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when used with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots which decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate all kinds of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler which uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we find some drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective optimization. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance when used on heterogeneous architectures, and an efficient Pareto front can be obtained with EAs when multiple objectives are pursued.
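The multi-objective selection that distinguishes the EA from linear-weighting approaches rests on Pareto dominance; the Python sketch below (a generic illustration, not the scheduler from this work) filters hypothetical candidate schedules down to their Pareto front for two objectives that are both minimized, such as peak temperature and makespan.

from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    # a dominates b if it is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    # keep only the non-dominated candidates
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical (peak temperature in deg C, makespan in ms) pairs for candidate schedules.
schedules = [(72.0, 15.0), (68.0, 18.0), (75.0, 14.0), (70.0, 16.0), (69.0, 21.0)]
print(pareto_front(schedules))   # (69.0, 21.0) is dominated by (68.0, 18.0) and is dropped

A scheduler can then pick from this front according to the current thermal and timing constraints instead of committing to a fixed weighting of the objectives in advance.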
Abstract:
In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. IEEE 802.11 series Wireless Local Area Network (WLAN) standards attract a lot of attention due to their low cost and high data rate. Wireless ad hoc networks which use IEEE 802.11 standards are one of the hot spots of recent network research, and designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for wireless ad hoc networks.
Existing wireless applications typically use omni-directional antennas. When using an omni-directional antenna, the gain of the antenna is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time; nodes other than the sender and the receiver must be either idle or listening, otherwise collisions could occur. The downside of the omni-directionality of antennas is that the spatial reuse ratio is low and the capacity of the network is considerably limited.
Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna has the following benefits: it can improve transport capacity by decreasing the interference of a directional main lobe; it can increase coverage range due to a higher SINR (Signal to Interference plus Noise Ratio), i.e., with the same power consumption, better connectivity can be achieved; and power usage can be reduced, i.e., for the same coverage, a transmitter can reduce its power consumption.
To utilize the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and a pipelined data transmission can be achieved by using directional antennas. The throughput can be improved significantly by introducing the relay-enabled MAC protocol.
Besides these strong points, directional antennas also have some drawbacks, such as the hidden terminal and deafness problems and the requirement of retaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a tradeoff between using omni-directional and using directional antennas to obtain better network performance over this configuration.
Directly and mathematically establishing the relationship between the network performance and the antenna configurations is extremely difficult, if not intractable. Therefore, in this research, we propose several clustering-based methods to obtain approximate solutions for the heterogeneous antenna configuration problem, which can improve network performance significantly.
Our proposed methods consist of two steps. The first step (clustering links) is to cluster the links into different groups based on the matrix-based system model. After being clustered, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) is to decide the type of antenna for each group. For heterogeneous antennas, some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments are conducted to compare the proposed methods with existing methods, and the experimental results demonstrate that our clustering-based methods can improve network performance significantly.
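The two-step idea, clustering links by neighborhood similarity and then labeling each cluster with an antenna type, can be sketched generically in Python as below; the link-neighborhood sets, the Jaccard similarity measure and the threshold are illustrative assumptions, not the exact matrix-based formulation used in this work.

# Illustrative sketch of step 1 (clustering links): links with sufficiently similar
# neighborhoods are grouped together, and each group would later be assigned one
# antenna type in step 2 (labeling). All data and the threshold are hypothetical.
neighborhood = {                      # link id -> set of neighboring node ids
    "e1": {1, 2, 3},
    "e2": {1, 2, 4},
    "e3": {7, 8},
    "e4": {7, 8, 9},
}

def jaccard(a: set, b: set) -> float:
    # similarity of two neighborhoods: size of intersection over size of union
    return len(a & b) / len(a | b)

threshold = 0.4
groups = []                           # each group of links shares one antenna type
for link, nbrs in neighborhood.items():
    for group in groups:
        rep = next(iter(group))       # compare against one representative link of the group
        if jaccard(nbrs, neighborhood[rep]) >= threshold:
            group.add(link)
            break
    else:
        groups.append({link})

print(groups)   # two groups here: {e1, e2} and {e3, e4}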
Abstract:
Seagrass meadows are highly productive habitats found along many of the world's coastlines, providing important services that support the overall functioning of the coastal zone. The organic carbon that accumulates in seagrass meadows is derived not only from seagrass production but also from the trapping of other particles, as the seagrass canopies facilitate sedimentation and reduce resuspension. Here we provide a comprehensive synthesis of the available data to obtain a better understanding of the relative contribution of seagrass and other possible sources of organic matter that accumulate in the sediments of seagrass meadows. The data set includes 219 paired analyses of the carbon isotopic composition of seagrass leaves and sediments from 207 seagrass sites at 88 locations worldwide. Using a three-source mixing model and literature values for putative sources, we calculate that the average proportional contribution of seagrass to the surface sediment organic carbon pool is ∼50%. When using the best available estimates of carbon burial rates in seagrass meadows, our data indicate that between 41 and 66 gC m^-2 yr^-1 originates from seagrass production. Using our global average for allochthonous carbon trapped in seagrass sediments together with a recent estimate of global average net community production, we estimate that carbon burial in seagrass meadows is between 48 and 112 Tg yr^-1, showing that seagrass meadows are natural hot spots for carbon sequestration.
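A three-source mixing model of the kind used in the synthesis can be written as a small linear system: two tracer balances (for example δ13C and δ15N) plus the constraint that the three source fractions sum to one. The Python sketch below solves such a system; the end-member and sediment values are hypothetical placeholders, not the literature values used in the paper.

import numpy as np

# Columns: seagrass, algal (e.g. phytoplankton/epiphyte), terrestrial organic matter.
# All isotope values are assumed for illustration (permil).
d13C = [-10.0, -21.0, -27.0]   # assumed end-member d13C
d15N = [  4.0,   6.0,   1.0]   # assumed end-member d15N

A = np.array([d13C, d15N, [1.0, 1.0, 1.0]])   # two tracer balances plus mass balance
b = np.array([-16.0, 4.5, 1.0])               # hypothetical sediment d13C, d15N, and 1

fractions = np.linalg.solve(A, b)             # [f_seagrass, f_algal, f_terrestrial]
for name, f in zip(["seagrass", "algal", "terrestrial"], fractions):
    print(f"{name}: {f:.2f}")

With these placeholder numbers the solution happens to attribute roughly half of the sediment organic carbon to the seagrass end member, the same order as the ∼50% average reported above; real applications propagate end-member variability and uncertainty rather than solving a single exact system.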