950 results for "Function of locally varying complexity"
Abstract:
We optimized the emission efficiency of microcavity OLEDs consisting of widely used organic materials: N,N'-di(naphthalene-1-yl)-N,N'-diphenylbenzidine (NPB) as the hole transport layer and tris(8-hydroxyquinoline)aluminum (Alq3) as the emitting and electron transport layer. LiF/Al served as the cathode, and metallic Ag as the anode. TiO2 and Al2O3 layers were stacked on top of the cathode to alter the properties of the top mirror. The electroluminescence emission spectra, electric field distribution inside the device, carrier density, recombination rate, and exciton density were calculated as a function of the position of the emission layer. The results show that for certain TiO2 and Al2O3 layer thicknesses, light output is enhanced as a result of an increase in both the reflectance and transmittance of the top mirror. Once the optimum structure has been determined, the microcavity OLED devices can be fabricated and characterized, and comparisons between experiment and theory can be made.
Abstract:
The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
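For concreteness, the order-of-growth results referenced above can be illustrated with the classical distribution-free sufficient sample size for PAC learning (in the style of Blumer et al.); the constants are those of that commonly cited bound, and the function is illustrative only, since the abstract's point is that exact, algorithm-specific requirements can be much smaller:

```python
import math

def pac_sample_bound(vc_dim, epsilon, delta):
    # Classical sufficient sample size for PAC learning a class of
    # VC dimension `vc_dim` to accuracy epsilon with confidence 1 - delta.
    # This is an upper bound, not the exact requirement.
    term_conf = (4.0 / epsilon) * math.log2(2.0 / delta)
    term_dim = (8.0 * vc_dim / epsilon) * math.log2(13.0 / epsilon)
    return math.ceil(max(term_conf, term_dim))
```

For example, `pac_sample_bound(1, 0.1, 0.05)` gives a bound in the hundreds of examples, whereas the abstract argues that an optimal algorithm for VC dimension 1 can require far fewer.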
Abstract:
Modern advances in technology have led to more complex manufacturing processes whose success centres on the ability to control them with a very high level of accuracy. Plant complexity inevitably leads to poor models that exhibit a high degree of parametric or functional uncertainty. The situation becomes even more complex if the plant to be controlled is characterised by a multivalued function, or if it exhibits a number of modes of behaviour during its operation. Since an intelligent controller is expected to operate and guarantee the best performance where complexity and uncertainty coexist and interact, control engineers and theorists have recently developed new techniques under the framework of intelligent control to enhance controller performance for more complex and uncertain plants. These techniques are based on incorporating model uncertainty, and the resulting control algorithms have been shown to give more accurate control under uncertain conditions. In this paper, we survey some approaches that appear promising for enhancing the performance of intelligent control systems in the face of higher levels of complexity and uncertainty.
Abstract:
Tree islands are an important structural component of many graminoid-dominated wetlands because they increase ecological complexity in the landscape. Tree island area has been drastically reduced by hydrologic modifications within the Everglades ecosystem, yet little is known about the ecosystem ecology of Everglades tree islands. As part of an ongoing study to investigate the effects of hydrologic restoration on short-hydroperiod marshes of the southern Everglades, we report an ecosystem characterization of seasonally flooded tree islands relative to locations described by variation in freshwater flow (i.e., freshwater flow locally enhanced by levee removal). We quantified: (1) forest structure, litterfall production, nutrient utilization, soil dynamics, and hydrologic properties of six tree islands and (2) soil and surface water physico-chemical properties of adjacent marshes. Tree islands efficiently utilized both phosphorus and nitrogen, but indices of nutrient-use efficiency indicated stronger P than N limitation. Tree islands were distinct in structure and biogeochemical properties from the surrounding marsh, maintaining higher organically bound P and N, but lower inorganic N. Annual variation resulting in increased hydroperiod and lower wet-season water levels not only increased nitrogen use by tree species and decreased N:P values of the dominant plant species (Chrysobalanus icaco), but also increased soil pH and decreased soil temperature. When compared with other forested wetlands, these Everglades tree islands were among the most nutrient efficient, likely a function of nutrient immobilization in soils and the calcium carbonate bedrock. Tree islands of our study area are defined by: (1) unique biogeochemical properties when compared with adjacent short-hydroperiod marshes and other forested wetlands and (2) an intricate relationship with marsh hydrology.
As such, they may play an important and disproportionate role in nutrient and carbon cycling in Everglades wetlands. With the loss of tree islands that has occurred with the degradation of the Everglades system, these landscape processes may have been altered. With this baseline dataset, we have established a long-term ecosystem-scale experiment to follow the ecosystem trajectory of seasonally flooded tree islands in response to hydrologic restoration of the southern Everglades.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
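As a minimal illustration of the gap between naïve and model-based metrics discussed above, the following sketch computes CNR from two ROIs and a non-prewhitening matched-filter detectability index in the idealized white-noise case; the dissertation's actual observer models (e.g., the channelized Hotelling family) are considerably more elaborate:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    # Contrast-to-noise ratio: absolute mean difference over background std.
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def npw_dprime(expected_signal, noise_std):
    # Non-prewhitening matched-filter detectability for uncorrelated
    # (white) noise: d' = ||s|| / sigma, with s the noise-free signal.
    return np.sqrt((expected_signal ** 2).sum()) / noise_std
```

Unlike CNR, the matched-filter index depends on the full spatial profile of the expected signal, which is one reason it tracks human performance better across reconstruction algorithms.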
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation is needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
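The standard ensemble NPS estimator underlying such measurements can be sketched as below for square noise-only ROIs; this is the simplified textbook form, not the irregular-ROI method developed in the study:

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    # Ensemble 2-D noise power spectrum: average periodogram of
    # mean-subtracted noise-only ROIs, scaled by pixel area / ROI size.
    rois = np.asarray(noise_rois, dtype=float)
    n_roi, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    periodograms = np.abs(np.fft.fft2(rois)) ** 2
    return periodograms.mean(axis=0) * pixel_size ** 2 / (nx * ny)
```

By Parseval's theorem, the sum of the estimate over all frequency bins relates directly to the pixel variance, a handy sanity check on the normalization.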
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
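A toy version of such an analytical lesion morphology model might look like the following; the radial form, parameter names, and edge-sharpness exponent are hypothetical illustrations, not the dissertation's actual equations:

```python
import numpy as np

def lesion_profile(r, radius, contrast, n=2.0):
    # Radially symmetric lesion contrast profile: peak `contrast` (HU)
    # at the center, rolling off to zero at `radius`; n controls how
    # sharp the edge is. Purely illustrative parameterisation.
    r = np.asarray(r, dtype=float)
    core = np.clip(1.0 - (r / radius) ** 2, 0.0, None)
    return contrast * core ** n
```

Evaluating such a model on a voxel grid and adding it to a patient image yields a “hybrid” image in which the lesion's size, contrast, and location are known exactly.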
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The Australian southern continental margin is the world’s largest site of cool-water carbonate deposition, and the Great Australian Bight is its largest sector. The Eyre Peninsula is fringed by coastal beaches with aeolianites and marks the eastern edge of the Great Australian Bight. Five shoreline transects of varying lengths spanned a 150 km longitudinal distance, and at each the intertidal, beach, dune, and secondary dune environments were sampled, for a total of 18 samples. Sediments are a mixture of modern, relict, and Cenozoic carbonates, and quartz grains. Carbonate aeolianites on the western Eyre Peninsula are mostly composed of modern carbonate grains: predominantly molluscs (23-33%) and benthic foraminifera (10-26%), with locally abundant coralline algae (3-28%), echinoids (2-22%), and bryozoans (2-14%). Cenozoic grain abundance ranges from 1-6%, whereas relict grain abundance ranges from 0-17%. A southward increase in bryozoan particles correlates with an increase in nutrient abundance and a decrease in temperature due to a large seasonal coastal upwelling system that drives 2-3 major upwelling events per year, bringing cold, nutrient-rich Sub-Antarctic Surface Water (<12°C) onto the shelf. In southern, mostly wind-protected locations, the beach and dune sediment compositions are similar, indicating that wind energy has successfully carried all sediment components of the beach into the adjacent dunes. In northern, exposed locations, the composition is not the same everywhere, and trends indicate that relative wind energy has the ability to impact grain composition through preferential wind transport. Aeolianite composition is therefore a function of both upwelling and the degree of coastal exposure.
Abstract:
Sediment oxygen demand (SOD) can be a significant oxygen sink in various types of water bodies, particularly slow-moving waters with substantial organic sediment accumulation. In most settings where SOD is a concern, the prevailing hydraulic conditions are such that the impact of sediment resuspension on SOD is not considered. However, in the case of Bubbly Creek in Chicago, Illinois, the prevailing slack water conditions are interrupted by infrequent intervals of very high flow rates associated with pumped combined sewer overflow (CSO) during intense hydrologic events. These events can cause resuspension of the highly organic, nutrient-rich bottom sediments, resulting in precipitous drawdown of dissolved oxygen (DO) in the water column. While many past studies have addressed the dependence of SOD on near-bed velocity and bed shear stress prior to the point of sediment resuspension, there has been limited research that has attempted to characterize the complex and dynamic phenomenon of resuspended-sediment oxygen demand. To address this issue, a new in situ experimental apparatus referred to as the U of I Hydrodynamic SOD Sampler was designed to achieve a broad range of velocities and associated bed shear stresses. This allowed SOD to be analyzed across the spectrum from no sediment resuspension, associated with low velocity/bed shear stress, through full sediment resuspension, associated with high velocity/bed shear stress. The current study split SOD into two separate components: (1) SODNR is the sediment oxygen demand associated with non-resuspension conditions and is a surface sink calculated using traditional methods to yield a value with units of g/m²/day; and (2) SODR is the oxygen demand associated with resuspension conditions, which is a volumetric sink most accurately characterized using non-traditional methods and units that reflect suspension in the water column (mg/L/day).
In the case of resuspension, the suspended sediment concentration was analyzed as a function of bed shear stress, and a formulation was developed to characterize SODR as a function of suspended sediment concentration, in a form similar to first-order biochemical oxygen demand (BOD) kinetics with a Monod DO term. The results obtained are intended to be implemented in a numerical model containing hydrodynamic, sediment transport, and water quality components to yield oxygen demand varying in both space and time for specific flow events. Such implementation will allow evaluation of proposed Bubbly Creek water quality improvement alternatives that take into account the impact of SOD under various flow conditions. Although the findings were based on experiments specific to the conditions in Bubbly Creek, the techniques and formulations developed in this study should be applicable to similar sites.
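The described kinetic form, first-order in suspended-sediment-driven demand with a Monod dissolved-oxygen limitation, can be sketched as a toy forward-Euler integration; the rate constant, half-saturation value, and suspended-sediment concentration below are invented for illustration and are not values from the study:

```python
def simulate_do(do_init, tss, k=0.5, k_half=1.0, dt=0.01, t_end=2.0):
    # Toy resuspended-sediment oxygen demand:
    #   dDO/dt = -k * TSS * DO / (k_half + DO)
    # i.e., demand proportional to suspended sediment concentration (TSS),
    # throttled by a Monod term as DO becomes scarce. All constants
    # here are illustrative placeholders.
    do = do_init
    for _ in range(int(t_end / dt)):
        do -= dt * k * tss * do / (k_half + do)
        do = max(do, 0.0)  # DO cannot go negative
    return do
```

A coupled hydrodynamic/sediment-transport model would supply TSS(x, t) from the computed bed shear stress, making the sink vary in space and time as the abstract describes.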
Abstract:
The p23 protein is a chaperone widely involved in protein homeostasis, well known as an Hsp90 co-chaperone since it also controls the Hsp90 chaperone cycle. Human p23 includes a β-sheet domain, responsible for interacting with Hsp90, and a charged C-terminal region whose function is not clear but which seems to be natively unfolded. p23 can undergo caspase-dependent proteolytic cleavage to form p19 (p23₁₋₁₄₂), which is involved in apoptosis, while p23 itself has anti-apoptotic activity. To better elucidate the function of the human p23 C-terminal region, we comparatively studied the full-length human p23 and three C-terminal truncation mutants: p23₁₋₁₁₇, p23₁₋₁₃₁, and p23₁₋₁₄₂. Our data indicate that p23 and p19 have distinct characteristics, whereas the other two truncations behave similarly, with some differences from p23 and p19. We found that part of the C-terminal region can fold into an α-helix conformation and contributes slightly to p23 thermal stability, suggesting that the C-terminal region interacts with the β-sheet domain. As a whole, our results suggest that the C-terminal region of p23 is critical for its structure-function relationship. A mechanism in which the human p23 C-terminal region behaves as an activation/inhibition module for different p23 activities is proposed.
Abstract:
Dense cataracts, the commonest ocular disorder in dogs, preclude fundoscopic examination and thus the diagnosis of retinal disorders to which dogs may be predisposed. The aim of this study was to compare electroretinographic responses, recorded according to the International Society for Clinical Electrophysiology of Vision human protocol, to evaluate retinal function in diabetic and non-diabetic dogs presenting mature or hypermature cataracts. Full-field electroretinograms were recorded from 66 dogs, aged 6 to 15 years, allocated into two groups: (1) CG, non-diabetic cataractous dogs, and (2) DG, diabetic cataractous dogs. Mean peak-to-peak amplitude (microvolts) and b-wave implicit time (milliseconds) were determined for each of the five standard full-field ERG responses (rod response, maximal response, oscillatory potentials, single-flash cone response, and 30 Hz flicker). Compared with CG, ERGs recorded from diabetic dogs presented lower amplitudes and prolonged b-wave implicit times in all ERG responses. The prolonged b-wave implicit time was statistically significant (p < 0.05) at 30 Hz flicker (24.0 ms versus 22.4 ms). These data suggest that full-field ERG can record subtle alterations, such as flicker implicit time, making it useful for investigating retinal dysfunction in diabetic dogs.
Abstract:
The Ca II triplet (CaT) feature in the near-infrared has been employed as a metallicity indicator for individual stars as well as for the integrated light of Galactic globular clusters (GCs) and galaxies, with varying degrees of success and sometimes puzzling results. Using the DEIMOS multi-object spectrograph on Keck, we obtain a sample of 144 integrated light spectra of GCs around the brightest group galaxy NGC 1407 to test whether the CaT index can be used as a metallicity indicator for extragalactic GCs. Different sets of single stellar population models make different predictions for the behavior of the CaT as a function of metallicity. In this work, the metallicities of the GCs around NGC 1407 are obtained from CaT index values using an empirical conversion. The measured CaT/metallicity distributions show unexpected features, the most remarkable being that the brightest red and blue GCs have similar CaT values despite their large difference in mean color. Suggested explanations for this behavior in the NGC 1407 GC system are that (1) the CaT may be affected by a population of hot blue stars, (2) the CaT may saturate earlier than predicted by the models, and/or (3) color may not trace metallicity linearly. Until these possibilities are understood, the use of the CaT as a metallicity indicator for the integrated spectra of extragalactic GCs will remain problematic.
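For readers unfamiliar with line-strength indices: a CaT-style index is typically the sum of pseudo-equivalent widths of the three Ca II lines measured over fixed bandpasses. A minimal sketch of a single-line pseudo-EW follows; continuum fitting and the specific bandpass definitions, which differ between index systems, are omitted:

```python
import numpy as np

def pseudo_ew(wavelength, flux, continuum):
    # Pseudo-equivalent width EW = integral of (1 - F/Fc) d(lambda),
    # integrated here with the trapezoidal rule over the bandpass.
    w = np.asarray(wavelength, dtype=float)
    depth = 1.0 - np.asarray(flux, dtype=float) / np.asarray(continuum, dtype=float)
    return float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(w)))
```

Summing such EWs for the three triplet lines gives a CaT index, which an empirical conversion (as used above) then maps to [Fe/H].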
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t).
If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m-M)₀ ≈ 0.02 mag and ΔE(B-V) ≈ 0.01 mag, for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints of the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
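The chi-squared-like CMD comparison at the core of the SFH-recovery technique amounts to binning observed and model CMDs into Hess diagrams and comparing them cell by cell. In sketch form (the bin limits and the exact statistic here are illustrative assumptions, not those of the VMC pipeline):

```python
import numpy as np

def hess_chi2(obs_col, obs_mag, mod_col, mod_mag, bins=20):
    # Bin both CMDs into Hess diagrams over fixed (assumed) colour and
    # magnitude limits, then form a chi-squared-like cell-by-cell sum.
    rng = [[-0.5, 2.5], [12.0, 24.0]]  # assumed colour / magnitude range
    h_obs, _, _ = np.histogram2d(obs_col, obs_mag, bins=bins, range=rng)
    h_mod, _, _ = np.histogram2d(mod_col, mod_mag, bins=bins, range=rng)
    return float(np.sum((h_obs - h_mod) ** 2 / np.maximum(h_mod, 1.0)))
```

An SFH-recovery code minimises such a statistic over the weights of the partial models (simple stellar populations) that compose the synthetic CMD.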
Abstract:
The highly expressed D7 protein family of mosquito saliva has previously been shown to act as an anti-inflammatory mediator by binding host biogenic amines and cysteinyl leukotrienes (CysLTs). In this study we demonstrate that AnSt-D7L1, a two-domain member of this group from Anopheles stephensi, retains the CysLT binding function seen in the homolog AeD7 from Aedes aegypti but has lost the ability to bind biogenic amines. Unlike any previously characterized member of the D7 family, AnSt-D7L1 has acquired the important function of binding thromboxane A2 (TXA2) and its analogs with high affinity. When administered to tissue preparations, AnSt-D7L1 abrogated leukotriene C4 (LTC4)-induced contraction of guinea pig ileum and contraction of rat aorta by the TXA2 analog U46619. The protein also inhibited platelet aggregation induced by both collagen and U46619 when administered to stirred platelets. The crystal structure of AnSt-D7L1 contains two OBP-like domains and is similar to that of AeD7. In AnSt-D7L1, the binding pocket of the C-terminal domain has been rearranged relative to AeD7, making the protein unable to bind biogenic amines. Structures of the ligand complexes show that CysLTs and TXA2 analogs both bind in the same hydrophobic pocket of the N-terminal domain. The TXA2 analog U46619 is stabilized by hydrogen bonding interactions of the omega-5 hydroxyl group with the phenolic hydroxyl group of Tyr 52. LTC4 occupies a very similar position to LTE4 in the previously determined structure of its complex with AeD7. As yet, it is not known what, if any, new function has been acquired by the rearranged C-terminal domain. This article presents, to our knowledge, the first structural characterization of a protein from mosquito saliva that inhibits collagen-mediated platelet activation.
Resumo:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with high accuracy, mainly because the available time series are short relative to the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. To assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expression data generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer function is obtained by random drawing from the set of possible Boolean functions, thus defining its dynamics. The DREAM time series data, in contrast, cover varying network sizes, with topologies based on real networks and dynamics generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology.
The best value of the Tsallis entropy free parameter was, on average, in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
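As a minimal illustration of the criterion described above (not the authors' DimReduction implementation), the Tsallis generalized entropy S_q = (1 - sum p_i^q)/(q - 1) and a conditional-entropy score for discrete expression samples can be sketched in Python; the function names and the default q = 2.5 (from the reported best range) are illustrative assumptions:

```python
import numpy as np

def tsallis_entropy(p, q=2.5):
    """Tsallis generalized entropy S_q = (1 - sum p_i^q) / (q - 1).
    Reduces to the Shannon entropy (in nats) in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # terms with p_i = 0 contribute nothing
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))  # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def conditional_tsallis_entropy(x, y, q=2.5):
    """H_q(Y | X) for discrete samples: the Tsallis entropy of Y within
    each value of the candidate predictor X, weighted by p(x). Lower
    values mean X is more predictive of Y (the feature-selection score)."""
    x, y = np.asarray(x), np.asarray(y)
    h = 0.0
    for xv in np.unique(x):
        mask = x == xv
        py = np.bincount(y[mask]) / mask.sum()  # empirical P(Y | X=xv)
        h += (mask.sum() / len(x)) * tsallis_entropy(py, q)
    return h
```

In a feature-selection loop, each candidate predictor gene (or gene subset) x would be scored by `conditional_tsallis_entropy(x, y, q)` against the target gene's next-state values y, and the lowest-scoring predictor retained.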
Resumo:
Background: The archaeal exosome is formed by a hexameric RNase PH ring and three RNA-binding subunits and has been shown to bind and degrade RNA in vitro. Despite extensive studies on the eukaryotic exosome and on the proteins interacting with this complex, little information is yet available on the identification and function of archaeal exosome regulatory factors. Results: Here, we show that the proteins PaSBDS and PaNip7, which bind preferentially to poly-A and AU-rich RNAs, respectively, affect the activity of the Pyrococcus abyssi exosome in vitro. PaSBDS slightly inhibits degradation of a poly-rA substrate, while PaNip7 strongly inhibits the degradation of poly-A and poly-AU by the exosome. The inhibition of the exosome by PaNip7 appears to depend at least partially on its interaction with RNA, since mutants of PaNip7 that no longer bind RNA inhibit the exosome less strongly. We also show that FITC-labeled PaNip7 associates with the exosome in the absence of substrate RNA. Conclusions: Given the high structural homology between the archaeal and eukaryotic proteins, the effects of archaeal Nip7 and SBDS on the exosome provide a model for an evolutionarily conserved exosome control mechanism.
Resumo:
A bifilar Bi-2212 bulk coil with a parallel shunt resistor was tested under fault-current conditions using a 3 MVA single-phase transformer on a 220 V, 60 Hz line, reaching a fault-current peak of 8 kA. The fault-current tests were performed from a steady-state peak current of 200 A by applying controlled short circuits of up to 8 kA, with durations varying from one to six cycles. The test results show that the shunt resistor provides homogeneous quench behavior of the HTS coil, in addition to its intrinsic stabilizing role. The current-limiting ratio reached a factor of 4.2 over 5 cycles without any degradation.