835 results for Structural modeling of digital informational environments


Relevance:

100.00%

Publisher:

Abstract:

Due to relative ground movement, buried pipelines experience geotechnical loads. The imposed geotechnical loads may initiate pipeline deformations that affect system serviceability and integrity. Engineering guidelines (e.g., ALA, 2005; Honegger and Nyman, 2001) provide the technical framework to develop idealized structural models to analyze pipe‒soil interaction events and assess pipe mechanical response. The soil behavior is modeled using discrete springs that represent the geotechnical loads per unit pipe length developed during the interaction event. Soil forces are defined along three orthogonal directions (i.e., axial, lateral and vertical) to analyze the response of pipelines. The nonlinear load-displacement relationship of the soil defined by each spring is independent of neighboring spring elements. However, recent experimental and numerical studies demonstrate significant coupling effects during oblique (i.e., not along one of the orthogonal axes) pipe‒soil interaction events. In the present study, physical modeling using a geotechnical centrifuge was conducted to improve the current understanding of soil load coupling effects on buried pipes in loose and dense sand. A section of pipeline, at shallow burial depth, was translated through the soil at different oblique angles in the axial-lateral plane. The force exerted by the soil on the pipe is critically examined to assess the significance of load coupling effects and establish a yield envelope. The displacements required to mobilize the soil yield force are also examined to assess potential coupling in mobilization distance. A set of laboratory tests was conducted on the sand used for centrifuge modeling to characterize its stress-strain behavior, which was used to examine the possible failure mechanisms in the centrifuge model tests. The yield envelope, deformation patterns, and interpreted failure mechanisms obtained from centrifuge modeling are compared with other physical modeling and numerical simulations available in the literature.
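The guideline idealization described above can be sketched in a few lines. The bilinear spring law, the parameter values, and the elliptical form of the coupled yield check below are illustrative assumptions for exposition, not the ALA (2005) formulas or the envelope measured in the centrifuge tests.

```python
import math

def spring_force(displacement, k, f_yield):
    """Bilinear soil spring: linear up to yield, then constant (per unit pipe length)."""
    f = k * displacement
    return math.copysign(min(abs(f), f_yield), displacement)

def uncoupled_resultant(u_axial, u_lateral, springs):
    """Resultant soil load when the axial and lateral springs act independently,
    as in the conventional guideline idealization."""
    fa = spring_force(u_axial, *springs["axial"])
    fl = spring_force(u_lateral, *springs["lateral"])
    return math.hypot(fa, fl)

def coupled_yield_check(f_axial, f_lateral, fa_max, fl_max):
    """Illustrative elliptical yield envelope: values >= 1 indicate soil yield.
    Under oblique loading this predicts yield earlier than the uncoupled model."""
    return (f_axial / fa_max) ** 2 + (f_lateral / fl_max) ** 2

# Hypothetical spring parameters: (stiffness in N/m per m, yield force in N per m)
springs = {"axial": (2.0e5, 1.0e4), "lateral": (5.0e5, 5.0e4)}
resultant = uncoupled_resultant(0.1, 0.1, springs)
```

The coupling effects studied in the centrifuge tests are precisely what the independent-spring model above cannot represent: the axial and lateral components there do not interact, whereas a yield envelope ties them together.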

Relevance:

100.00%

Publisher:

Abstract:

Transcription factors (TFs) control the temporal and spatial expression of target genes by interacting with DNA in a sequence-specific manner. Recent advances in high-throughput experiments that measure TF-DNA interactions in vitro and in vivo have facilitated the identification of DNA binding sites for thousands of TFs. However, it remains unclear how each individual TF achieves its specificity, especially in the case of paralogous TFs that recognize distinct genomic target sites despite sharing very similar DNA binding motifs. In my work, I used a combination of high-throughput in vitro protein-DNA binding assays and machine-learning algorithms to characterize and model the binding specificity of 11 paralogous TFs from 4 distinct structural families. My work shows that even very closely related paralogous TFs, with indistinguishable DNA binding motifs, oftentimes exhibit differential binding specificity for their genomic target sites, especially for sites with moderate binding affinity. Importantly, the differences I identify in vitro and through computational modeling help explain, at least in part, the differential in vivo genomic targeting of paralogous TFs. Future work will focus on in vivo factors that might also be important for specificity differences between paralogous TFs, such as DNA methylation, interactions with protein cofactors, or the chromatin environment. In this larger context, my work emphasizes the importance of intrinsic DNA binding specificity in the targeting of paralogous TFs to the genome.
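The kind of specificity difference described above can be illustrated with a toy position weight matrix (PWM) comparison. The matrices below are hypothetical and deliberately simple; they are not the models learned in this work, but they show how two paralogs with a near-identical core motif can still rank a moderate-affinity site differently.

```python
import math

# Hypothetical position probability matrices for two paralogous TFs sharing a
# core motif but differing at one low-information position (keys: A, C, G, T).
PWM_TF1 = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},  # indifferent position
]
PWM_TF2 = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},  # this paralog prefers G here
]

def log_odds_score(site, pwm, background=0.25):
    """Sum of per-position log2-odds against a uniform background."""
    return sum(math.log2(col[base] / background) for base, col in zip(site, pwm))

tf1_scores = {s: log_odds_score(s, PWM_TF1) for s in ("ACG", "ACA")}
tf2_scores = {s: log_odds_score(s, PWM_TF2) for s in ("ACG", "ACA")}
```

TF1 scores the two sites identically, while TF2 separates them at the third position, mirroring how differential specificity concentrates at moderate-affinity sites rather than at the shared consensus.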

Relevance:

100.00%

Publisher:

Abstract:

This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions which are motivated by and built on field data sets from two very different industries: air cargo logistics and retailing.

The first essay, based on a data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine their future purchase decisions, and that these service quality beliefs have a significant impact on those decisions. Moreover, customers are risk averse: they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does.
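A minimal sketch of the spillover idea, assuming a conjugate normal-normal belief update and a hypothetical similarity weight between routes (the essay's hierarchical model is considerably richer than this):

```python
def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update of a quality belief."""
    w = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

def spillover_update(beliefs, route, obs, obs_var, similarity):
    """Update the belief for the experienced route fully, and similar routes
    partially, by inflating the observation noise (an illustrative mechanism)."""
    out = {}
    for r, (m, v) in beliefs.items():
        s = 1.0 if r == route else similarity.get((route, r), 0.0)
        out[r] = bayes_update(m, v, obs, obs_var / s) if s > 0 else (m, v)
    return out

beliefs = {"A": (0.5, 1.0), "B": (0.5, 1.0)}   # prior (mean, variance) per route
similarity = {("A", "B"): 0.5}                  # hypothetical similarity weight
updated = spillover_update(beliefs, "A", obs=1.0, obs_var=1.0, similarity=similarity)
```

A good experience on route A moves the belief for the similar route B part of the way, and leaves B's belief more uncertain than A's: exactly the kind of residual belief uncertainty the essay finds customers are averse to.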

The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of unpackaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for the joint pricing and inventory management of multiple products, which aims at improving the company's profit from direct sales while reducing food waste, and thus improving social welfare.
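A discrete-continuous demand structure of the kind described can be sketched as a multinomial-logit product choice combined with a conditional purchase-quantity equation. The functional forms and coefficients below are illustrative assumptions, not estimates from the supermarket data:

```python
import math

def choice_probs(utilities):
    """Multinomial logit choice probabilities (max-shifted for numerical stability)."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

def expected_quantity(price, alpha, beta):
    """Log-linear conditional quantity: E[q | purchase] = exp(alpha - beta * price)."""
    return math.exp(alpha - beta * price)

# Hypothetical utilities for two vegetables and an outside (no-purchase) option
probs = choice_probs([1.0, 0.5, 0.0])
# Expected demand for product 1 = P(choose 1) * E[quantity | price]
demand = probs[0] * expected_quantity(2.0, alpha=1.0, beta=0.4)
```

The discrete layer captures which product is chosen; the continuous layer captures how much of it is bought, which is what makes the framework suitable for joint pricing and inventory decisions.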

Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.

Relevance:

100.00%

Publisher:

Abstract:

An inverse analysis for reactive transport of chlorides through concrete in the presence of an electric field is presented. The model is solved using MATLAB's built-in solvers "pdepe.m" and "ode15s.m". The results from the model are compared with experimental measurements from an accelerated migration test, and a function representing the lack of fit is formed. This function is optimised with respect to a varying number of key parameters defining the model, using a Levenberg-Marquardt trust-region optimisation approach. The paper presents a method by which the degree of inter-dependency between parameters and the sensitivity (significance) of each parameter towards model predictions can be studied on models with or without clearly defined governing equations. Eigenvalue analysis of the Hessian matrix was employed to investigate and avoid over-parametrisation in the inverse analysis. We investigated simultaneous fitting of parameters for diffusivity, chloride binding as defined by a Freundlich isotherm (thermodynamic parameters), and binding rate (a kinetic parameter). Fitting more than two parameters simultaneously demonstrates a high degree of parameter inter-dependency. This finding is significant because mathematical models for chloride transport rely on several parameters for each mode of transport (i.e., diffusivity, binding, etc.), which combined may lead to unreliable simultaneous estimation of parameters.
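The over-parametrisation diagnostic can be illustrated on a toy problem. The model below is a hypothetical one-dimensional surrogate (not the chloride transport PDE solved with pdepe.m), but the mechanics are the same: build the residual Jacobian, form the Gauss-Newton approximation J^T J of the Hessian, and inspect its eigenvalue spread.

```python
import math

def model(t, D, a):
    """Hypothetical surrogate response: saturation with rate D and amplitude a."""
    return 1.0 - a * math.exp(-D * t)

def residuals(params, data):
    D, a = params
    return [obs - model(t, D, a) for t, obs in data]

def jacobian(params, data, h=1e-6):
    """Forward-difference Jacobian of the residual vector."""
    base = residuals(params, data)
    J = []
    for i in range(len(base)):
        row = []
        for j in range(len(params)):
            p = list(params)
            p[j] += h
            row.append((residuals(p, data)[i] - base[i]) / h)
        J.append(row)
    return J

def jtj_eigenvalues(J):
    """Eigenvalues of the 2x2 Gauss-Newton Hessian J^T J; a near-zero
    eigenvalue flags strongly inter-dependent (poorly identifiable) parameters."""
    a = sum(r[0] * r[0] for r in J)
    b = sum(r[0] * r[1] for r in J)
    d = sum(r[1] * r[1] for r in J)
    mid = (a + d) / 2.0
    disc = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return mid - disc, mid + disc

# Synthetic noise-free data at 'true' parameters (D=0.5, a=0.8)
data = [(t, model(t, 0.5, 0.8)) for t in range(1, 6)]
lam_min, lam_max = jtj_eigenvalues(jacobian((0.5, 0.8), data))
condition = lam_max / lam_min  # large ratio -> parameter inter-dependency
```

A large eigenvalue ratio means some direction in parameter space barely changes the fit, which is exactly how simultaneous estimation of several transport parameters becomes unreliable.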

Relevance:

100.00%

Publisher:

Abstract:

The ever-growing demand for digital data transfer is driving the development of new technologies to increase network capacity, particularly in optical fiber networks. Among these new technologies, spatial multiplexing can multiply the capacity of current optical links. We are particularly interested in a form of spatial multiplexing that uses the orbital angular momentum of light as an orthogonal basis to separate a number of channels. We first present the notions of electromagnetism and physics needed to understand the subsequent developments. Maxwell's equations are derived in order to explain the scalar and vector modes of the optical fiber. We also present other modal properties, namely mode cutoff and the group and dispersion indices. The notion of orbital angular momentum is then introduced, with particular attention to its applications in telecommunications. In the second part, we propose the modal map as a tool to assist in the design of few-mode optical fibers. We develop the vector solution of the mode cutoff equations for ring-core fibers, then generalize these equations to all three-layer fiber profiles. Finally, we give some examples of applications of the modal map. In the third part, we present fiber designs for the transmission of modes carrying orbital angular momentum. The tools developed in the second part are used to carry out these designs. A first fiber design, characterized by a hollow center, is studied and demonstrated. A second design, a family of fibers with a ring profile, is then studied. Effective-index and group-index measurements are performed on these fibers.
The tools and fibers developed have enabled a better understanding of the transmission of modes carrying orbital angular momentum in optical fiber. We hope that these advances will help in the development of high-performance communication systems using spatial multiplexing.
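As a minimal numerical companion to the modal discussion, the standard step-index normalized-frequency criterion below shows how a fiber's parameters set the number of guided modes (this is the textbook criterion, not the three-layer ring-fiber cutoff equations developed in the thesis; the index values are illustrative):

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency V of a step-index fiber."""
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2 * math.pi * core_radius_um / wavelength_um * na

def is_single_mode(V):
    """Only the fundamental LP01 mode propagates below the LP11 cutoff V = 2.405."""
    return V < 2.405

# Illustrative standard single-mode-like fiber at 1550 nm
V = v_number(4.1, 1.55, 1.4504, 1.4447)
```

Few-mode and OAM-carrying fibers deliberately operate above the V = 2.405 cutoff so that the higher-order mode groups used as multiplexing channels are guided.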

Relevance:

100.00%

Publisher:

Abstract:

Computer game technology is poised to make a significant impact on the way our youngsters will learn. Our youngsters are ‘Digital Natives’, immersed in digital technologies, especially computer games, and they expect to utilize these technologies in learning contexts. This expectation, and our response as educators, may change classroom practice and inform curriculum developments. This chapter approaches these issues ‘head on’. Starting from a review of current educational issues and an evaluation of educational theory and instructional design principles, a new theoretical approach to the construction of “Educational Immersive Environments” (EIEs) is proposed. Elements of this approach are applied to the development of an EIE to support literacy education in UK primary schools, and an evaluation of a trial within a UK primary school is discussed. Conclusions from both the theoretical development and the evaluation suggest how future teacher-practitioners may embrace both the technology and our approach to develop their own learning resources.

Relevance:

100.00%

Publisher:

Abstract:

The power of computer game technology is currently being harnessed to produce “serious games”. These “games” are targeted at the education and training marketplace, and employ various key game-engine components such as the graphics and physics engines to produce realistic “digital-world” simulations of the real “physical world”. Many approaches are driven by the technology and often lack a consideration of a firm pedagogical underpinning. The authors believe that an analysis and deployment of both the technological and pedagogical dimensions should occur together, with the pedagogical dimension providing the lead. This chapter explores the relationship between these two dimensions: how “pedagogy may inform the use of technology”, and how various learning theories may be mapped onto the affordances of computer game engines. Autonomous and collaborative learning approaches are discussed. The design of a serious game is broken down into spatial and temporal elements. The spatial dimension is related to theories of knowledge structures, especially “concept maps”; the temporal dimension is related to “experiential learning”, especially the approach of Kolb. The multi-player aspect of serious games is related to theories of “collaborative learning”, discussed in terms of “discourse” versus “dialogue”. Several general guiding principles are explored, such as the use of “metaphor” (including metaphors of space, embodiment, systems thinking, the internet and emergence). The topological design of a serious game is also highlighted. The discussion of pedagogy is related to various serious games we have recently produced and researched, and is presented in the hope of informing the “serious game community”.

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance:

100.00%

Publisher:

Abstract:

Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on the insertion of additional logic (called “key gates”) or on camouflaging existing gates in order to make it difficult for a malicious user to obtain the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, they can use advanced logic analysis, circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some ‘provably secure’ logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don’t care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals, but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don’t care conditions (namely, Observability and Satisfiability Don’t Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification.
Here, camouflaging or obfuscating a gate means replacing the candidate gate by a 4-to-1 multiplexer, which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse engineering complexity based on don’t-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design. We analyze the reverse engineering complexity by applying our obfuscation algorithm to the ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.

References:
[1] R. Chakraborty and S. Bhunia, “HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009.
[2] J. A. Roy, F. Koushanfar, and I. L. Markov, “EPIC: Ending Piracy of Integrated Circuits,” in 2008 Design, Automation and Test in Europe, 2008, pp. 1069–1074.
[3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, “Security Analysis of Integrated Circuit Camouflaging,” in ACM Conference on Computer and Communications Security, 2013.
[4] B. Liu and B. Wang, “Embedded reconfigurable logic for ASIC design obfuscation against supply chain attacks,” in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
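The key-programmable cell of Bao et al. [4] can be simulated in a few lines. This sketch only illustrates the behavior of a camouflaged 2-input cell; the don't-care analysis and logic-cone-based gate selection described above are not reproduced here.

```python
def mux4x1(key_bits, a, b):
    """4-to-1 multiplexer used as a camouflaged 2-input cell: the gate inputs
    (a, b) select one of four key bits, so the 4-bit key encodes any of the
    16 possible 2-input/1-output Boolean functions."""
    return key_bits[(a << 1) | b]

# Truth tables encoded as key bits for input rows (a, b) = 00, 01, 10, 11
AND_KEY = (0, 0, 0, 1)
XOR_KEY = (0, 1, 1, 0)

# The same physical cell realizes different gates depending on the hidden key.
assert all(mux4x1(AND_KEY, a, b) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(mux4x1(XOR_KEY, a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))
```

Without the key, an attacker must treat each camouflaged cell as any of its 16 possible functions, so n methodically chosen cells multiply the candidate netlists by up to 16^n: the source of the exponential reverse-engineering complexity claimed above.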

Relevance:

100.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session, and highly secure authentication methods must therefore be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operations, and large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to those actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and is also able to detect outliers in role-based user behavior with good performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and in the analysis of network traffic.
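The behavioral model can be given a minimal flavor in code. This is an illustrative bigram (n = 2) model with add-one smoothing over a hypothetical action vocabulary, not the Intruder Detector implementation:

```python
from collections import Counter
import math

def bigrams(actions):
    """Adjacent action pairs from an ordered log of user actions."""
    return list(zip(actions, actions[1:]))

class UserModel:
    """Per-user bigram model over web-log actions with add-one smoothing."""
    def __init__(self, history, vocab):
        self.counts = Counter(bigrams(history))
        self.context = Counter(a for a, _ in bigrams(history))
        self.vocab = len(vocab)

    def avg_log_likelihood(self, session):
        """Average log-probability of a session; low values flag deviation
        from the user's typical behavior."""
        grams = bigrams(session)
        total = 0.0
        for a, b in grams:
            p = (self.counts[(a, b)] + 1) / (self.context[a] + self.vocab)
            total += math.log(p)
        return total / max(len(grams), 1)

# Hypothetical action vocabulary and a user whose habit is search-then-view
vocab = {"login", "search", "view", "edit", "logout"}
model = UserModel(["login", "search", "view", "search", "view", "logout"] * 20, vocab)
typical = model.avg_log_likelihood(["login", "search", "view", "logout"])
unusual = model.avg_log_likelihood(["login", "edit", "edit", "edit"])
```

A session scoring well below a user's typical average log-likelihood would be flagged as a large deviation from "normal behavior", and thus as possible account misuse.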

Relevance:

100.00%

Publisher:

Abstract:

Ubiquitylation, the covalent attachment of ubiquitin (Ub) to a variety of substrate proteins in cells, is a versatile post-translational modification involved in the regulation of numerous cellular processes. The distinct messages that polyubiquitylation encodes are attributed to the multitude of conformations possible through attachment of ubiquitin monomers within a polyubiquitin chain via a specific lysine residue. The hypothesis is thus that linkage defines polyubiquitin conformation, which in turn determines specific recognition by cellular receptors. Ubiquitylation of membrane surface receptor proteins plays a very important role in regulating receptor-mediated endocytosis as well as endosomal sorting for lysosomal degradation. Epsin1 is an endocytic adaptor protein with three tandem UIMs (Ubiquitin Interacting Motifs), which are responsible for the highly specific interaction between epsin and ubiquitylated receptors. Epsin1 is also an oncogenic protein, and its expression is upregulated in some types of cancer. Recently it has been shown that novel K11 and K63 mixed-linkage polyubiquitin chains serve as the internalization signal for the MHC I (Major Histocompatibility Complex I) molecule through their association with the tUIMs of epsin1. However, the molecular mode of action and structural details of the interaction between polyubiquitin chains on receptors and the tUIMs of epsin1 are yet to be determined. This information is crucial for the development of anticancer therapeutics targeting epsin1. The molecular basis for the linkage-specific recognition of K11 and K63 mixed-linkage polyubiquitin chains by the tandem UIMs of the endocytic adaptor protein epsin1 is investigated here using a combination of NMR methods.

Relevance:

100.00%

Publisher:

Abstract:

Turnip crinkle virus (TCV) and Pea enation mosaic virus (PEMV) are two positive (+)-strand RNA viruses that are used to investigate the regulation of translation and replication, owing to their small size and simple genomes. Both viruses contain cap-independent translation elements (CITEs) within their 3´ untranslated regions (UTRs) that fold into tRNA-shaped structures (TSS), according to nuclear magnetic resonance and small-angle X-ray scattering analysis (TCV) and computational prediction (PEMV). Specifically, the TCV TSS can directly associate with ribosomes and participates in RNA-dependent RNA polymerase (RdRp) binding. The PEMV kissing-loop TSS (kl-TSS) can simultaneously bind to ribosomes and associate with the 5´ UTR of the viral genome. Mutational analysis and chemical structure probing methods provide great insight into the function and secondary structure of the two 3´ CITEs; however, the lack of 3-D structural information has limited our understanding of their functional dynamics. Here, I report the folding dynamics of the TCV TSS using optical tweezers (OT), a single-molecule technique. My study of the unfolding/folding pathways of the TCV TSS has revealed an unexpected unfolding pathway, confirmed the presence of the Ψ3 and hairpin elements, and suggested an interconnection between the hairpins and pseudoknots. In addition, this study has demonstrated the importance of the adjacent upstream adenylate-rich sequence for the formation of H4a/Ψ3, along with the contribution of magnesium to the stability of the TCV TSS. In my second project, I report on the structural analysis of the PEMV kl-TSS using NMR and SAXS. This study has reconfirmed the base-pair pattern of the PEMV kl-TSS and the proposed interaction of the kl-TSS with its partner, hairpin 5H2. The molecular envelope of the kl-TSS built from SAXS analysis suggests that the kl-TSS has two functional conformations, one of which differs in shape from the previously predicted tRNA-shaped form. Along with applying biophysical methods to study the structural folding dynamics of RNAs, I have also developed a technique that improves the production of large quantities of recombinant RNAs in vivo for NMR study. In this project, I report using wild-type and mutant E. coli strains to produce cost-effective, site-specifically labeled recombinant RNAs. This technique was validated with four representative RNAs of different sizes and complexity, producing milligram amounts of RNA. The benefit of using site-specifically labeled RNAs made in E. coli was demonstrated with several NMR techniques.

Relevance:

100.00%

Publisher:

Abstract:

Twin-screw extrusion is used to compound fillers into a polymer matrix in order to improve the properties of the final product. The resultant properties of the composite are determined by the operating conditions used during extrusion processing. Changes in the operating conditions affect the physics of the melt flow, inducing unique composite properties. In the following work, the Residence Stress Distribution methodology has been applied to model both the stress behavior and the property response of a twin-screw compounding process as a function of the operating conditions. The compounding of a pigment into a polymer melt has been investigated to determine the effect of stress on the degree of mixing, which will affect the properties of the composite. In addition, the pharmaceutical properties resulting from the compounding of an active pharmaceutical ingredient are modeled as a function of the operating conditions, indicating the physical behavior inducing the property responses.

Relevance:

100.00%

Publisher:

Abstract:

Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. 
We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, increasing complexity, and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Collectively they imply a challenge that demands an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is particularly prioritised. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome and of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and their mitigation. They can be deconstructed into their atomic units and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment, and how adopting risk as the focus of our analysis safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements. Doing so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community.
We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are exposed by revisiting legacy studies and exposing the resource and associated applications to evaluation by the digital preservation community.

Relevance:

100.00%

Publisher:

Abstract:

The development of innovative carbon-based materials can be greatly facilitated by molecular modeling techniques. Although the Reactive Force Field (ReaxFF) can be used to simulate the chemical behavior of carbon-based systems, the simulation settings required for accurate predictions have not been fully explored. Using the ReaxFF, molecular dynamics (MD) simulations are used to simulate the chemical behavior of pure-carbon and hydrocarbon reactive gases that are involved in the formation of carbon structures such as graphite, buckyballs, amorphous carbon, and carbon nanotubes. It is determined that the maximum simulation time step that can be used in MD simulations with the ReaxFF depends on the simulated temperature and the selected parameter set, as do the predicted reaction rates. It is also determined that different carbon-based reactive gases react at different rates, and that the predicted equilibrium structures are generally the same across the different ReaxFF parameter sets, except in the case of the predicted formation of large graphitic structures with the Chenoweth parameter set under specific conditions.
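The time-step finding can be connected to a standard MD rule of thumb (an illustrative calculation, not the paper's parameter-set-specific limits): the integration step should be a small fraction of the fastest vibrational period in the system, for example a C–H stretch near 3000 cm⁻¹.

```python
C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

def max_timestep_fs(wavenumber_cm, fraction=0.1):
    """Rule-of-thumb MD time step: a fraction of the fastest vibrational period.
    The 0.1 fraction is a common heuristic, not a ReaxFF-specific value."""
    period_s = 1.0 / (C_CM_PER_S * wavenumber_cm)
    return period_s * fraction * 1e15  # seconds -> femtoseconds

dt = max_timestep_fs(3000.0)  # C-H stretch; roughly on the order of 1 fs
```

Reactive simulations at elevated temperatures typically require still smaller steps, which is consistent with the temperature dependence of the usable time step reported above.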