952 results for RNA Dynamic Structure
Abstract:
Top predators can have large effects on community and population dynamics, but we still know relatively little about their roles in ecosystems and which biotic and abiotic factors potentially affect their behavioral patterns. Understanding the roles played by top predators is a pressing issue because many top predator populations around the world are declining rapidly, yet we do not fully understand what the consequences of their potential extirpation could be for ecosystem structure and function. In addition, individual behavioral specialization is commonplace across many taxa, but studies of its prevalence, causes, and consequences in top predator populations are lacking. In this dissertation I investigated the movement, feeding patterns, and drivers and implications of individual specialization in an American alligator (Alligator mississippiensis) population inhabiting a dynamic subtropical estuary. I found that alligator movement and feeding behaviors in this population were largely regulated by a combination of biotic and abiotic factors that varied seasonally. I also found that the population consisted of individuals displaying an extremely wide range of movement and feeding behaviors, indicating that individual specialization is potentially an important determinant of the varied roles of alligators in ecosystems. Ultimately, I found that it is likely incorrect to assume that top predator populations consist of individuals that all behave in similar ways in terms of their feeding, movements, and potential roles in ecosystems. As climate change and ecosystem restoration and conservation activities continue to affect top predator populations worldwide, individuals will likely respond in different and possibly unexpected ways.
Abstract:
Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper level from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower level from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at confining pressures equal to the differential pressures average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
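The porosity calculation from resistivity via Archie's Law can be illustrated with a minimal sketch; the Archie constants (a = 1, m = 2) and the resistivity values below are generic illustrative numbers, not the values used in the Hole 504B experiment:

```python
# Archie's law: formation factor F = R_t / R_w = a * phi**(-m),
# so phi = (a * R_w / R_t)**(1/m).
# a = 1 and m = 2 are generic constants; resistivities are hypothetical.
def archie_porosity(r_formation, r_water, a=1.0, m=2.0):
    """Estimate fractional porosity from formation and pore-water resistivity."""
    formation_factor = r_formation / r_water
    return (a / formation_factor) ** (1.0 / m)

phi = archie_porosity(r_formation=25.0, r_water=0.25)  # ohm-m (hypothetical)
print(round(phi, 3))  # 0.1
```

A porosity estimated this way can then be compared against the logged Vp through a chosen velocity-porosity relation, which is how the upper and lower bounds above are obtained.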
Abstract:
This work consists essentially of the development of an Artificial Neural Network (ANN) to model the behavior of composite materials subjected to fatigue loading. The proposal is to develop and present a mixed model that associates an analytical equation (the Adam Equation) with the structure of the ANN. Since composites often show similar behavior when subjected to fluctuating loads, this equation establishes a pre-defined comparison pattern for a generic material, so that the ANN fits the behavior of another composite material to that pattern. In this way, the ANN does not need to fully learn the behavior of a given material, because the Adam Equation does most of the work. This model was applied to two different network architectures, modular and perceptron, with the aim of analyzing its efficiency in distinct structures. Beyond the different architectures, the responses generated from two different data sets, with three and with two S-N curves, were analyzed. The model was also compared with results from the specialized literature, which use a conventional ANN structure. The results consist of analyzing and comparing characteristics such as generalization capacity, robustness, and the Goodman diagrams developed by the networks.
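The mixed-model idea, an analytical baseline pattern plus a data-driven correction that only has to learn the deviation of a specific material, can be sketched as follows. The Adam Equation is not reproduced in the abstract, so a generic Basquin-type S-N baseline stands in for it, a simple polynomial fit stands in for the ANN, and all S-N data are hypothetical:

```python
import numpy as np

# Mixed-model sketch: an analytical S-N baseline provides a generic fatigue
# pattern; the data-driven part (a polynomial fit standing in for the ANN)
# only learns the *residual* of a specific composite relative to that pattern.
# The Basquin form below is a stand-in for the Adam Equation.

def baseline_log_life(stress, A=3000.0, b=-0.1):
    """Generic Basquin-type baseline N = (S/A)**(1/b), returned as log10(N)."""
    return np.log10((stress / A) ** (1.0 / b))

# Hypothetical S-N data for one material (stress in MPa, cycles to failure).
stress = np.array([400.0, 300.0, 200.0, 100.0])
log_life = np.array([2.8, 4.1, 5.9, 9.0])

# Fit only the residual between the data and the analytical pattern.
residual = log_life - baseline_log_life(stress)
coeffs = np.polyfit(stress, residual, deg=1)

def predict_log_life(s):
    """Analytical pattern plus learned correction."""
    return baseline_log_life(s) + np.polyval(coeffs, s)
```

The design point is that the correction term is much smoother than the raw S-N behavior, so a small model (or, in the thesis, a smaller ANN) suffices to fit it.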
Abstract:
The authors would like to express their gratitude to their supporters. Drs Jim Cousins, S.R. Uma and Ken Gledhill facilitated this research by providing access to GeoNet seismic data and structural building information. Piotr Omenzetter’s work within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen is supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.
Abstract:
Nucleic acids (DNA and RNA) play essential roles in the central dogma of biology for the storage and transfer of genetic information. The unique chemical and conformational structures of nucleic acids, such as the double helix composed of complementary Watson-Crick base pairs, provide the structural basis for carrying out their biological functions. The DNA double helix can dynamically accommodate Watson-Crick and Hoogsteen base-pairing, in which the purine base is flipped by ~180° to adopt the syn rather than the anti conformation found in Watson-Crick base pairs. There is growing evidence that Hoogsteen base pairs play important roles in DNA replication, recognition, damage or mispair accommodation, and repair. Here, we constructed a database of existing Hoogsteen base pairs in DNA duplexes through a structure-based survey of the Protein Data Bank, and structural analyses based on the resulting Hoogsteen structures revealed that Hoogsteen base pairs occur in a wide variety of biological contexts and can induce DNA kinking towards the major groove. As there were documented difficulties in distinguishing Hoogsteen from Watson-Crick geometries in crystallographic models, we collaborated with the Richardson lab and identified potential Hoogsteen base pairs that had been mis-modeled as Watson-Crick base pairs, suggesting that Hoogsteen pairing may be more prevalent than previously thought. We developed a solution NMR method, combined with site-specific isotope labeling, to characterize the formation of, or conformational exchange with, Hoogsteen base pairs in large DNA-protein complexes under solution conditions, in the absence of crystal packing forces. We showed that there is enhanced chemical exchange, potentially between Watson-Crick and Hoogsteen, at a sharp kink site in the complex formed by DNA and the Integration Host Factor protein. In stark contrast to B-form DNA, we found that Hoogsteen base pairs are strongly disfavored in A-form RNA duplexes.
The chemical modifications N1-methyladenosine and N1-methylguanosine, which block Watson-Crick base-pairing, can be absorbed as Hoogsteen base pairs in DNA, but potently destabilize A-form RNA and cause helix melting. The intrinsic instability of Hoogsteen base pairs in A-form RNA makes N1-methylation a functional post-transcriptional modification that is known to facilitate RNA folding and translation and may play roles in the epitranscriptome. Conversely, the dynamic ability of DNA to accommodate Hoogsteen base pairs could be critical to maintaining genome stability.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: the air phase is replaced by a pressure boundary condition in order to greatly reduce the size of the computational domain; a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods; and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
Genetic decoding is not ‘frozen’ as was earlier thought, but dynamic. One facet of this is frameshifting that often results in synthesis of a C-terminal region encoded by a new frame. Ribosomal frameshifting is utilized for the synthesis of additional products, for regulatory purposes and for translational ‘correction’ of problem or ‘savior’ indels. Utilization for synthesis of additional products occurs prominently in the decoding of mobile chromosomal element and viral genomes. One class of regulatory frameshifting of stable chromosomal genes governs cellular polyamine levels from yeasts to humans. In many cases of productively utilized frameshifting, the proportion of ribosomes that frameshift at a shift-prone site is enhanced by specific nascent peptide or mRNA context features. Such mRNA signals, which can be 5′ or 3′ of the shift site or both, can act by pairing with ribosomal RNA or as stem loops or pseudoknots even with one component being 4 kb 3′ from the shift site. Transcriptional realignment at slippage-prone sequences also generates productively utilized products encoded trans-frame with respect to the genomic sequence. This too can be enhanced by nucleic acid structure. Together with dynamic codon redefinition, frameshifting is one of the forms of recoding that enriches gene expression.
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models are expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
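The core mechanics, computing choice probabilities by solving a dynamic programming problem, can be illustrated with a tiny recursive-logit sketch on an acyclic network. The network, link utilities, and function names below are hypothetical, not from the thesis:

```python
import math

# Recursive-logit sketch on a tiny acyclic network; node "d" is the
# destination. The expected downstream utility satisfies the logsum Bellman
# recursion V(i) = log(sum_j exp(u(i,j) + V(j))), and the link choice
# probability is P(j|i) = exp(u(i,j) + V(j) - V(i)). Utilities are made up.
links = {
    "a": {"b": -1.0, "c": -1.5},
    "b": {"d": -1.0},
    "c": {"d": -0.5},
    "d": {},
}

def value(node, memo=None):
    """Backward induction for the logsum value function."""
    memo = {} if memo is None else memo
    if node == "d":
        return 0.0
    if node not in memo:
        memo[node] = math.log(sum(math.exp(u + value(j, memo))
                                  for j, u in links[node].items()))
    return memo[node]

def choice_prob(i, j):
    """Logit probability of choosing link (i, j) at node i."""
    return math.exp(links[i][j] + value(j) - value(i))

# Probabilities out of each node sum to one by construction.
print(round(choice_prob("a", "b") + choice_prob("a", "c"), 6))  # 1.0
```

In estimation, this value computation sits inside the likelihood: each candidate parameter vector changes the link utilities, the Bellman recursion is re-solved, and the resulting probabilities are evaluated at the observed paths, which is why efficient dynamic programming is central to the methods above.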
Abstract:
As part of the ultrafast charge dynamics initiated by high intensity laser irradiation of solid targets, high amplitude EM pulses propagate away from the interaction point and are transported along any stalks and wires attached to the target. The propagation of these high amplitude pulses along a thin wire connected to a laser irradiated target was diagnosed via the proton radiography technique, measuring a pulse duration of 20 ps and a pulse velocity close to the speed of light. The strong electric field associated with the EM pulse can be exploited for dynamically controlling the proton beams produced from a laser-driven source. Chromatic divergence control of broadband laser driven protons (up to 75% reduction in divergence of >5 MeV protons) was obtained by winding the supporting wire around the proton beam axis to create a helical coil structure. In addition to providing focussing and energy selection, the technique has the potential to post-accelerate the transiting protons by the longitudinal component of the curved electric field lines produced by the helical coil lens.
Abstract:
Iron is the main constituent of the core of rocky planets; therefore, understanding its phase diagram under extreme conditions is fundamental to model the planets’ evolution. Using dynamic compression by laser-driven shocks, pressure and temperature conditions close to what is found in these cores can be reached. However, it remains unclear whether phase boundaries determined at nanosecond timescales agree with static compression. Here we observed the presence of solid hexagonal close-packed iron at 170 GPa and 4,150 K, in a part of the iron phase diagram, where either a different solid structure or liquid iron has been proposed. This X-ray diffraction experiment confirms that laser compression is suitable for studying iron at conditions of deep planetary interiors difficult to achieve with static compression techniques.
Abstract:
In recent years, vibration-based structural damage identification has been the subject of significant research in structural engineering. The basic idea of vibration-based methods is that damage induces changes in mechanical properties that cause anomalies in the dynamic response of the structure; measuring these anomalies allows damage and its extent to be localized. Measured vibration data, such as frequencies and mode shapes, can be used in Finite Element Model Updating to adjust structural parameters sensitive to damage (e.g. Young's modulus). The novel aspect of this thesis is the introduction into the objective function of accurate measurements of strain mode shapes, evaluated through FBG sensors. After a review of the relevant literature, the case study, an irregular prestressed concrete beam intended for the roofing of industrial structures, is presented. The mathematical model was built through FE models, studying the static and dynamic behaviour of the element. Another analytical model, based on the Ritz method, was developed in order to investigate the possible interaction between the RC beam and the steel supporting table used for testing. Experimental data, recorded through the simultaneous use of different measurement techniques (optical fibers, accelerometers, LVDTs), were compared with theoretical data, allowing the best model to be identified, for which the settings for the updating procedure are outlined.
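The model-updating step can be sketched as a least-squares objective over an updating parameter. The toy closed-form "model" below (natural frequency scaling with the square root of stiffness) stands in for a real FE solver, and all numbers are hypothetical, not the thesis data:

```python
import numpy as np

# Model-updating sketch: residuals between "measured" and model-predicted
# natural frequencies, minimized over a scalar updating parameter (a Young's
# modulus factor). A real FE model would go where predicted_freqs() is; here
# a linear-structure scaling law (f ~ sqrt(E)) stands in. Data are made up.
meas_freqs = np.array([4.1, 12.3])   # Hz, "measured" frequencies
ref_freqs = np.array([4.0, 12.0])    # model frequencies at E-factor = 1

def predicted_freqs(e_factor):
    return ref_freqs * np.sqrt(e_factor)  # stiffness scaling of a linear model

def objective(e_factor):
    """Sum of squared relative frequency residuals (unit weights)."""
    r = (predicted_freqs(e_factor) - meas_freqs) / meas_freqs
    return float(np.sum(r ** 2))

# Coarse 1-D search over the updating parameter.
grid = np.linspace(0.8, 1.3, 501)
best = grid[np.argmin([objective(e) for e in grid])]
```

In the thesis the objective additionally contains strain mode shape residuals from the FBG sensors; the structure of the problem, a residual vector minimized over stiffness parameters, is the same.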
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is a tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. 
I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
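The neighborhood-driven aggregation described above can be sketched minimally: each node maintains a running aggregate over events produced in its 1-hop neighborhood, updated incrementally as edges and events stream in. This sketch assumes a simple count aggregate and omits the overlay-sharing, pre-computation, and distribution machinery of the actual systems; all names are illustrative:

```python
from collections import defaultdict

# Minimal sketch of continuous neighborhood-driven aggregation on a dynamic
# graph: each incoming event updates only the aggregates of the producing
# node and its current neighbors, so queries read pre-maintained state.
class StreamingNeighborhoodCount:
    def __init__(self):
        self.adj = defaultdict(set)   # undirected adjacency, mutable over time
        self.agg = defaultdict(int)   # per-node 1-hop neighborhood event count

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def on_event(self, node):
        # An event at `node` contributes to its own aggregate and to every
        # current neighbor's aggregate; each update touches only 1-hop state.
        self.agg[node] += 1
        for nb in self.adj[node]:
            self.agg[nb] += 1

g = StreamingNeighborhoodCount()
g.add_edge("u1", "u2")
g.add_edge("u2", "u3")
g.on_event("u2")   # counted by u1, u2, u3
g.on_event("u3")   # counted by u2, u3
print(g.agg["u2"])  # 2
```

The aggregation overlay graph in the dissertation generalizes exactly this update path, sharing partial aggregates across queries instead of recomputing per node.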
Abstract:
One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since the initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches firm capital structure from an information asymmetry perspective and assumes a hierarchy of financing, with firms using internal funds first, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on panel data consisting of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of capital structure determinants are identified: firm size, profitability, firm growth opportunities, risk, asset tangibility, taxes, speed of adjustment and financial deficit. A regression analysis is used to examine the effects of the firm characteristics on capital structure. The regression models were formed based on the relevant theories. The general capital structure model is estimated with the fixed effects estimator. Additionally, dynamic models play an important role in several areas of corporate finance, but with the combination of fixed effects and lagged dependent variables, model estimation is more complicated. A dynamic partial adjustment model is estimated using the Arellano and Bond (1991) first-differencing generalized method of moments, ordinary least squares and fixed effects estimators. The results for Finnish listed firms show support for the predicted effects of profitability, firm size and non-debt tax shields. However, no conclusive support for the pecking order theory is found; the effect of pecking order cannot be fully ignored, and it is concluded that, instead of being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% a year, using the book debt ratio.
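The partial-adjustment mechanism and the reported 29% adjustment speed can be illustrated numerically. The adjustment rule is D_t - D_{t-1} = lambda * (D* - D_{t-1}), so the gap to the target closes by a fraction lambda each year; the target and initial debt ratios below are hypothetical:

```python
# Partial-adjustment sketch: with adjustment speed lam, the remaining gap to
# the target debt ratio after t years is (1 - lam)**t of the initial gap.
# lam = 0.29 is the speed reported for Finnish listed firms; the target
# (d_star) and starting (d0) book debt ratios are hypothetical.
lam, d_star, d0 = 0.29, 0.40, 0.10

d = d0
path = []
for _ in range(5):
    d = d + lam * (d_star - d)   # D_t = D_{t-1} + lam * (D* - D_{t-1})
    path.append(round(d, 4))

remaining_gap = (d_star - d) / (d_star - d0)   # equals (1 - lam)**5
print(path[-1], round(remaining_gap, 4))
```

At 29% per year, roughly 18% of the initial gap remains after five years, which is why the adjustment speed, rather than the target itself, is the economically interesting estimate.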
Abstract:
Soil is a complex heterogeneous system comprising highly variable and dynamic micro-habitats that have significant impacts on the growth and activity of the resident microbiota. A question addressed in this research is how soil structure affects the temporal dynamics and spatial distribution of bacteria. Using repacked microcosms, the effect of bulk density, aggregate size and water content on the growth and distribution of introduced Pseudomonas fluorescens and Bacillus subtilis bacteria was determined. Soil bulk density and aggregate sizes were altered to manipulate the characteristics of the pore volume where bacteria reside and through which the distribution of solutes and nutrients is controlled. X-ray CT was used to characterise the pore geometry of the repacked soil microcosms. Soil porosity, connectivity and soil-pore interface area declined with increasing bulk density. In samples differing in pore geometry, the effect of that geometry on the growth and extent of spread of introduced bacteria was investigated. The growth rate of bacteria decreased with increasing bulk density, consistent with a significant difference in pore geometry. To measure the ability of bacteria to spread through soil, placement experiments were developed. Bacteria were capable of spreading several centimetres through soil. The spread of bacteria was faster and further in soil with larger and better connected pore volumes. To study the spatial distribution in detail, a methodology was developed combining X-ray microtomography, to characterize the soil structure, with fluorescence microscopy, to visualize and quantify bacteria in soil sections. The influence of pore characteristics on the distribution of bacteria was analysed at macro- and microscales. Soil porosity, connectivity and soil-pore interface influenced bacterial distribution only at the macroscale.
The method developed was applied to investigate the effect of soil pore characteristics on the extent of spread of bacteria introduced locally towards a C source in soil. The soil-pore interface influenced the spread and colonization of bacteria, and higher bacterial densities were found in soil with higher pore volumes. The results of this study therefore showed that pore geometry affects the growth and spread of bacteria in soil. The method developed showed how the thin-sectioning technique can be combined with 3D X-ray CT to visualize bacterial colonization of a 3D pore volume. This novel combination of methods is a significant step towards a full mechanistic understanding of microbial dynamics in structured soils.