971 results for Library design


Relevance: 30.00%

Abstract:

Several new ligand platforms designed to support iron dinitrogen chemistry have been developed. First, we report Fe complexes of a tris(phosphino)alkyl (CPiPr3) ligand featuring an axial carbon donor intended to conceptually model the interstitial carbide atom of the nitrogenase iron-molybdenum cofactor (FeMoco). It is established that in this scaffold, the iron center binds dinitrogen trans to the Calkyl anchor in three structurally characterized oxidation states. Fe-Calkyl lengthening is observed upon reduction, reflective of significant ionic character in the Fe-Calkyl interaction. The anionic (CPiPr3)FeN2- species can be functionalized by a silyl electrophile to generate (CPiPr3)Fe-N2SiR3. This species also functions as a modest catalyst for the reduction of N2 to NH3. Next, we introduce a new binucleating ligand scaffold that supports an Fe(μ-SAr)Fe diiron subunit that coordinates dinitrogen (N2-Fe(μ-SAr)Fe-N2) across at least three oxidation states (FeIIFeII, FeIIFeI, and FeIFeI). Despite the sulfur-rich coordination environment of iron in FeMoco, synthetic examples of transition metal model complexes that bind N2 and also feature sulfur donor ligands remain scarce; these complexes thus represent an unusual series of low-valent diiron complexes featuring thiolate and dinitrogen ligands. The (N2-Fe(μ-SAr)Fe-N2) system undergoes reduction of the bound N2 to produce NH3 (~50% yield) and can efficiently catalyze the disproportionation of N2H4 to NH3 and N2. The present scaffold also supports dinitrogen binding concomitant with hydride as a co-ligand. Next, inspired by the importance of secondary-sphere interactions in many metalloenzymes, we present complexes of iron in two new ligand scaffolds ([SiPNMe3] and [SiPiPr2PNMe]) that incorporate hydrogen-bond acceptors (tertiary amines) which engage in interactions with nitrogenous substrates bound to the iron center (NH3 and N2H4). Cation binding is also facilitated in anionic Fe(0)-N2 complexes. While Fe-N2 complexes of a related ligand ([SiPiPr3]) lacking hydrogen-bond acceptors produce a substantial amount of ammonia when treated with acid and reductant, the presence of the pendant amines instead facilitates the formation of metal hydride species.

Additionally, we present the development and mechanistic study of copper-mediated and copper-catalyzed photoinduced C-N bond forming reactions. Irradiation of a copper-amido complex, ((m-tol)3P)2Cu(carbazolide), in the presence of aryl halides furnishes N-phenylcarbazole under mild conditions. The mechanism likely proceeds via single-electron transfer from an excited state of the copper complex to the aryl halide, generating an aryl radical. An array of experimental data are consistent with a radical intermediate, including a cyclization/stereochemical investigation and a reactivity study, providing the first substantial experimental support for the viability of a radical pathway for Ullmann C-N bond formation. The copper complex can also be used as a precatalyst for Ullmann C-N couplings. We also disclose further study of catalytic Calkyl-N couplings using a CuI precatalyst, and discuss the likely role of [Cu(carbazolide)2]- and [Cu(carbazolide)3]- species as intermediates in these reactions.

Finally, we report a series of four-coordinate, pseudotetrahedral P3FeII-X complexes supported by tris(phosphine)borate ([PhBP3FeR]-) and phosphiniminato X-type ligands (-N=PR'3) that in combination tune the spin-crossover behavior of the system. Low-coordinate transition metal complexes such as these that undergo reversible spin-crossover remain rare, and the spin equilibria of these systems have been studied in detail by a suite of spectroscopic techniques.

Relevance: 30.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
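
For reference, the quadratic invariance condition alluded to above has a compact standard statement in the literature (quoted here as background; the notation below is not taken from the thesis). Writing G for the map from control inputs to measurements and S for the subspace of controllers respecting the information structure:

    \[
      K \, G \, K \in S \qquad \text{for all } K \in S .
    \]

When this holds, the set of achievable closed-loop maps is convex in a Youla-type parameter, which is why the constrained distributed optimal control problem admits a convex formulation.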

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results are concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, which is a unifying computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
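
The final, system-identification contribution hinges on nuclear norm minimization as a convex surrogate for rank. The sketch below illustrates that generic idea only; the variable names, the least-squares data-fit term and the regularization weight are assumptions for illustration, not the thesis's formulation.

    # Illustrative sketch: split a measured response matrix into a low-rank part
    # (standing in for the high-order-but-low-rank global dynamics) and a residual
    # (standing in for the low-order-but-full-rank local dynamics) by
    # nuclear-norm-regularized fitting.
    import numpy as np
    import cvxpy as cp

    def split_low_rank(G_meas, lam=1.0):
        """Return (L, R) with G_meas ~= L + R, where L is encouraged to be low rank."""
        L = cp.Variable(G_meas.shape)
        objective = cp.Minimize(cp.norm(L, "nuc") + lam * cp.sum_squares(G_meas - L))
        cp.Problem(objective).solve()
        return L.value, G_meas - L.value

    # Tiny synthetic example: a rank-2 "global" component plus a full-rank residual.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20)) + 0.1 * np.eye(20)
    L_hat, R_hat = split_low_rank(G, lam=0.5)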

Relevance: 30.00%

Abstract:

Part I

The physical phenomena which will ultimately limit the packing density of planar bipolar and MOS integrated circuits are examined. The maximum packing density is obtained by minimizing the supply voltage and the size of the devices. The minimum size of a bipolar transistor is determined by junction breakdown, punch-through and doping fluctuations. The minimum size of a MOS transistor is determined by gate oxide breakdown and drain-source punch-through. The packing density of fully active bipolar or static non-complementary MOS circuits becomes limited by power dissipation. The packing density of circuits which are not fully active, such as read-only memories, becomes limited by the area occupied by the devices, and the frequency is limited by the circuit time constants and by metal migration. The packing density of fully active dynamic or complementary MOS circuits is limited by the area occupied by the devices, and the frequency is limited by power dissipation and metal migration. It is concluded that read-only memories will reach approximately the same performance and packing density with MOS and bipolar technologies, while fully active circuits will reach the highest levels of integration with dynamic MOS or complementary MOS technologies.
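
A standard relation (background, not taken from Part I itself) makes the power-dissipation limit concrete: the dynamic power of a switched capacitive node is approximately

    \[
      P_{\text{dyn}} \approx \alpha \, C \, V_{DD}^{2} \, f ,
    \]

where alpha is the switching activity, C the switched capacitance, V_DD the supply voltage and f the operating frequency. For a fixed tolerable power density, lowering the supply voltage and shrinking the devices is what allows either the packing density or the frequency to rise, consistent with the conclusions above.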

Part II

Because the Schottky diode is a one-carrier device, it has both advantages and disadvantages with respect to the junction diode, which is a two-carrier device. The advantage is that there are practically no excess minority carriers which must be swept out before the diode blocks current in the reverse direction, i.e. a much faster recovery time. The disadvantage of the Schottky diode is that for a high-voltage device it is not possible to use conductivity modulation as in the p-i-n diode; since charge carriers are of one sign, no charge cancellation can occur and current becomes space charge limited. The Schottky diode design is developed in Section 2 and the characteristics of an optimally designed silicon Schottky diode are summarized in Fig. 9. Design criteria and a quantitative comparison of junction and Schottky diodes are given in Table 1 and Fig. 10. Although somewhat approximate, the treatment allows a systematic quantitative comparison of the devices for any given application.
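
The space-charge limitation mentioned above is conventionally expressed by the Mott-Gurney law for a trap-free unipolar drift region (quoted as standard background, not as the thesis's own derivation):

    \[
      J_{\text{SCL}} \;=\; \frac{9}{8}\, \varepsilon \, \mu \, \frac{V^{2}}{L^{3}} ,
    \]

where epsilon is the permittivity, mu the carrier mobility, V the applied voltage and L the drift-region thickness. Because only one sign of carrier is present, no conductivity modulation can relax this limit, unlike in the p-i-n junction diode.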

Part III

We interpret measurements of permittivity of perovskite strontium titanate as a function of orientation, temperature, electric field and frequency performed by Dr. Richard Neville. The free energy of the crystal is calculated as a function of polarization. The Curie-Weiss law and the LST relation are verified. A generalized LST relation is used to calculate the permittivity of strontium titanate from zero to optic frequencies. Two active optic modes are important. The lower frequency mode is attributed mainly to motion of the strontium ions with respect to the rest of the lattice, while the higher frequency active mode is attributed to motion of the titanium ions with respect to the oxygen lattice. An anomalous resonance which multi-domain strontium titanate crystals exhibit below 65°K is described and a plausible mechanism which explains the phenomenon is presented.
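
For reference, the two relations verified in this part take their usual textbook forms (standard background, not the thesis's notation):

    \[
      \varepsilon(T) \;\approx\; \frac{C}{T - T_{0}} \qquad \text{(Curie-Weiss law)},
    \]
    \[
      \frac{\varepsilon(0)}{\varepsilon(\infty)} \;=\; \prod_{i} \frac{\omega_{LO,i}^{2}}{\omega_{TO,i}^{2}} \qquad \text{(Lyddane-Sachs-Teller relation)},
    \]

where the product runs over the polar optic mode pairs; the generalized LST relation used here is what allows the permittivity to be reconstructed from zero up to optic frequencies from the measured mode frequencies.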

Relevance: 30.00%

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
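
One common way to write such a net asset value criterion is sketched below under simple assumptions (Poisson arrivals of damaging earthquakes and continuous exponential discounting); this is an illustration of the general idea, not the thesis's exact formulation:

    \[
      NAV \;=\; B \;-\; C_{0} \;-\; \frac{\nu \, E[L]}{r} ,
    \]

where B is the discounted benefit derived from the facility, C_0 the initial cost, nu the mean annual rate of damaging events, E[L] the expected loss per event (for example, from the assembly-based vulnerability analysis), and r the continuous discount rate; the last term is the expected present value of all future earthquake losses.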

The performance-based design framework presented here allows investigation of various design issues and their impact on a structural design. It is flexible and readily accommodates new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.

Relevance: 30.00%

Abstract:

Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, small and medium-sized enterprises are expected to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, although its characteristics and capabilities have been widely discussed, practical, real-world frameworks for entering the cloud are still lacking. This paper aims to fill this gap by presenting a real tool, already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses a diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information for deploying the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest in and low knowledge of this subject, and the tool presented aims to redress this mismatch insofar as possible.

Relevance: 30.00%

Abstract:

The findings are presented of a search conducted on traditional fishing gear design and construction using the ASFA database (1971-90) and the ICLARM Library and professional staff collections.

Relevance: 30.00%

Abstract:

The software package Dymola, which implements the new, vendor-independent standard modelling language Modelica, exemplifies the emerging generation of object-oriented modelling and simulation tools. This paper shows how, in addition to its simulation capabilities, it may be used as an embodiment design tool, to size automatically a design assembled from a library of generic parametric components. The example used is a miniature model aircraft diesel engine. To this end, the component classes contain extra algebraic equations calculating the overload factor (or its reciprocal, the safety factor) for all the different modes of failure, such as buckling or tensile yield. Thus the simulation results contain the maximum overload or minimum safety factor for each failure mode along with the critical instant and the device state at which it occurs. The Dymola "Initial Conditions Calculation" function, controlled by a simple software script, may then be used to perform automatic component sizing. Each component is minimised in mass, subject to a chosen safety factor against failure, over a given operating cycle. Whilst the example is in the realm of mechanical design, it must be emphasised that the approach is equally applicable to the electrical or mechatronic domains, indeed to any design problem requiring numerical constraint satisfaction.
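
The sizing idea can be illustrated outside Dymola with a small numerical sketch: minimise the mass of a single hypothetical component subject to a worst-case safety factor over an operating cycle. All material values, loads and dimensions below are assumptions for illustration, and the code only mirrors the concept, not the paper's Modelica implementation.

    # Hypothetical sizing sketch: find the smallest rod diameter whose worst-case
    # safety factor over a load cycle meets the target value, then report mass.
    import numpy as np
    from scipy.optimize import brentq

    E = 200e9         # Young's modulus of steel [Pa]   (assumed)
    SIGMA_Y = 350e6   # yield strength [Pa]             (assumed)
    RHO = 7850.0      # density [kg/m^3]                (assumed)
    LENGTH = 0.08     # rod length [m]                  (assumed)
    TARGET_SF = 2.0   # required safety factor          (assumed)

    # A hypothetical operating cycle of axial loads [N]; negative = compression.
    theta = np.linspace(0.0, 2.0 * np.pi, 200)
    cycle = 1500.0 * np.sin(theta) - 500.0

    def min_safety_factor(d):
        """Worst-case safety factor of a solid circular rod of diameter d [m]."""
        area = np.pi * d**2 / 4.0
        i_min = np.pi * d**4 / 64.0                  # second moment of area
        p_euler = np.pi**2 * E * i_min / LENGTH**2   # Euler buckling load (pinned ends)
        sf_yield = SIGMA_Y * area / np.abs(cycle)    # tensile/compressive yield
        sf_buckle = np.where(cycle < 0.0, p_euler / np.abs(cycle), np.inf)
        return float(np.min(np.minimum(sf_yield, sf_buckle)))

    d_opt = brentq(lambda d: min_safety_factor(d) - TARGET_SF, 1e-4, 0.05)
    mass = RHO * LENGTH * np.pi * d_opt**2 / 4.0
    print(f"diameter = {d_opt * 1e3:.2f} mm, mass = {mass * 1e3:.1f} g")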

Relevance: 30.00%

Abstract:

Program design is an area of programming that can benefit significantly from machine-mediated assistance. A proposed tool, called the Design Apprentice (DA), can assist a programmer in the detailed design of programs. The DA supports software reuse through a library of commonly-used algorithmic fragments, or cliches, that codifies standard programming. The cliche library enables the programmer to describe the design of a program concisely. The DA can detect some kinds of inconsistencies and incompleteness in program descriptions. It automates detailed design by automatically selecting appropriate algorithms and data structures. It supports the evolution of program designs by keeping explicit dependencies between the design decisions made. These capabilities of the DA are underlaid by a model of programming, called programming by successive elaboration, which mimics the way programmers interact. Programming by successive elaboration is characterized by the use of breadth-first exposition of layered program descriptions and the successive modifications of descriptions. A scenario is presented to illustrate the concept of the DA. Techniques for automating the detailed design process are described. A framework is given in which designs are incrementally augmented and modified by a succession of design steps. A library of cliches and a suite of design steps needed to support the scenario are presented.
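
As a toy illustration of the cliche-library idea, the sketch below indexes algorithmic fragments by the abstract operation they implement plus applicability conditions, and selects one from stated facts about the design. All names, entries and conditions here are hypothetical; the DA's actual representation is richer.

    # Toy cliche library: fragments indexed by the abstract operation they
    # implement, with applicability conditions and a rough cost annotation.
    from dataclasses import dataclass, field

    @dataclass
    class Cliche:
        name: str
        implements: str                               # abstract operation
        requires: dict = field(default_factory=dict)  # applicability conditions
        cost: str = "O(n)"                            # rough performance note

    LIBRARY = [
        Cliche("hash-table", "lookup-table", {"keys_hashable": True}, "O(1) avg"),
        Cliche("sorted-array+binary-search", "lookup-table", {"keys_ordered": True}, "O(log n)"),
        Cliche("association-list", "lookup-table", {}, "O(n)"),
    ]

    def select_cliche(operation, facts):
        """Pick the most specific fragment whose requirements the stated facts satisfy."""
        candidates = [c for c in LIBRARY if c.implements == operation]
        for c in sorted(candidates, key=lambda c: len(c.requires), reverse=True):
            if all(facts.get(k) == v for k, v in c.requires.items()):
                return c
        raise ValueError(f"no cliche implements {operation} under the given facts")

    print(select_cliche("lookup-table", {"keys_hashable": True}).name)  # hash-table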

Relevance: 30.00%

Abstract:

We describe a program called SketchIT capable of producing multiple families of designs from a single sketch. The program is given a rough sketch (drawn using line segments for part faces and icons for springs and kinematic joints) and a description of the desired behavior. The sketch is "rough" in the sense that taken literally, it may not work. From this single, perhaps flawed sketch and the behavior description, the program produces an entire family of working designs. The program also produces design variants, each of which is itself a family of designs. SketchIT represents each family of designs with a "behavior ensuring parametric model" (BEP-Model), a parametric model augmented with a set of constraints that ensure the geometry provides the desired behavior. The construction of the BEP-Model from the sketch and behavior description is the primary task and source of difficulty in this undertaking. SketchIT begins by abstracting the sketch to produce a qualitative configuration space (qc-space) which it then uses as its primary representation of behavior. SketchIT modifies this initial qc-space until qualitative simulation verifies that it produces the desired behavior. SketchIT's task is then to find geometries that implement this qc-space. It does this using a library of qc-space fragments. Each fragment is a piece of parametric geometry with a set of constraints that ensure the geometry implements a specific kind of boundary (qcs-curve) in qc-space. SketchIT assembles the fragments to produce the BEP-Model. SketchIT produces design variants by mapping the qc-space to multiple implementations, and by transforming rotating parts to translating parts and vice versa.
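
A minimal sketch of what a "behavior ensuring parametric model" fragment might look like as a data structure is given below: nominal parameter values plus constraints that must hold for the geometry to keep the intended qualitative behavior. The fragment, parameter names and constraints are entirely hypothetical illustrations, not SketchIT's actual representation.

    # Toy BEP-Model fragment: parametric values plus predicates that preserve the
    # qualitative behavior associated with one qcs-curve. Names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class BEPFragment:
        name: str
        parameters: dict   # nominal parameter values of the geometry
        constraints: list  # predicates over parameter assignments

    def satisfied(fragment, values):
        """True if a concrete parameter assignment meets every behavior constraint."""
        return all(pred(values) for pred in fragment.constraints)

    # Hypothetical pawl-tooth engagement: the tip must overlap the tooth face but
    # clear the root circle.
    engagement = BEPFragment(
        "pawl-engages-tooth",
        {"overlap": 1.5, "tip_radius": 9.0, "root_radius": 10.0},
        [lambda v: v["overlap"] > 0.0,
         lambda v: v["tip_radius"] < v["root_radius"]],
    )

    print(satisfied(engagement, engagement.parameters))                        # True
    print(satisfied(engagement, {**engagement.parameters, "overlap": -0.2}))   # False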

Relevance: 30.00%

Abstract:

Malicious software (malware) has significantly increased in number and effectiveness during the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show coders’ skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems are not able to detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning, and have also become very popular in commercial applications. However, attackers are now knowledgeable about these systems, and have started preparing their countermeasures. This has led to an arms race between attackers and developers. Novel systems are progressively built to tackle attacks that become more and more sophisticated. For this reason, there is a growing need for developers to anticipate the attackers’ moves. This means that defense systems should be built proactively, i.e., by introducing some security design principles in their development. The main goal of this work is to show that such a proactive approach can be employed in a number of case studies. To do so, I adopted a global methodology that can be divided into two steps. First, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attacker’s moves). Then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware. The idea is to show that a proactive approach can be applied both in the x86 and in the mobile world. The contributions provided in these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which it is possible to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (with the cooperation of the researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to developing Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained by using the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial for building systems that provide concrete security against general and evasion attacks.
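
The "optimal attacks" referred to above are typically formulated as gradient-based evasion of a differentiable surrogate classifier. The sketch below is a generic illustration of that idea only: the linear surrogate, the feature bounds and the additions-only constraint (content can be added to a file but not removed) are assumptions, and this is not the AdversariaLib implementation.

    # Generic gradient-based evasion sketch: nudge a malicious sample's feature
    # vector to lower a differentiable classifier score, allowing only feature
    # increases (as when content can be added to a PDF but not removed).
    import numpy as np

    def evade(x, w, b, steps=100, lr=0.1, x_max=None):
        """Greedily minimize the linear score w.x + b over feasible perturbations."""
        x_adv = x.astype(float).copy()
        for _ in range(steps):
            grad = w                        # gradient of a linear score w.r.t. x
            x_adv -= lr * grad              # descend the malicious score
            x_adv = np.maximum(x_adv, x)    # additions only: never below original
            if x_max is not None:
                x_adv = np.minimum(x_adv, x_max)
        return x_adv

    # Tiny usage example with made-up numbers.
    rng = np.random.default_rng(1)
    w = rng.standard_normal(10)             # surrogate weights (assumed known)
    b = 0.0
    x = np.abs(rng.standard_normal(10))     # original malicious feature vector
    x_adv = evade(x, w, b, x_max=x + 5.0)
    print(float(w @ x + b), float(w @ x_adv + b))   # the score should decrease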

Relevance: 30.00%

Abstract:

Urquhart, C., Durbin, J. & Spink, S. (2004). Training needs analysis of healthcare library staff, undertaken for South Yorkshire Workforce Development Confederation. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: South Yorkshire WDC (NHS)

Relevance: 30.00%

Abstract:

Tedd, L.A. (2006). Use of library and information science journals by Master's students in their dissertations: experiences at the University of Wales Aberystwyth. Aslib Proceedings: New Information Perspectives, 58(6), 570-581.

Relevance: 30.00%

Abstract:

Urquhart, C., Spink, S., Thomas, R. & Weightman, A. (2007). Developing a toolkit for assessing the impact of health library services on patient care. Report to LKDN (Libraries and Knowledge Development Network). Aberystwyth: Department of Information Studies, Aberystwyth University. Sponsorship: Libraries and Knowledge Development Network/NHS

Relevance: 30.00%

Abstract:

This thesis describes work carried out on the design of new routes to a range of bisindolylmaleimide and indolo[2,3-a]carbazole analogs, and investigation of their potential as successful anti-cancer agents. Following initial investigation of classical routes to indolo[2,3-a]pyrrolo[3,4-c]carbazole aglycons, a new strategy employing base-mediated condensation of thiourea and guanidine with a bisindolyl β-ketoester intermediate afforded novel 5,6-bisindolylpyrimidin-4(3H)-ones in moderate yields. Chemical diversity within this H-bonding scaffold was then studied by substitution with a panel of biologically relevant electrophiles, and by reductive desulfurisation. Optimisation of difficult heterogeneous literature conditions for oxidative desulfurisation of thiouracils was also accomplished, enabling a mild route to a novel 5,6-bisindolyluracil pharmacophore to be developed within this work. The oxidative cyclisation of selected acyclic bisindolyl systems to form a new planar class of indolo[2,3-a]pyrimido[5,4-c]carbazoles was also investigated. Successful conditions for this transformation, as well as the limitations currently prevailing for this approach are discussed. Synthesis of 3,4-bisindolyl-5-aminopyrazole as a potential isostere of bisindolylmaleimide agents was encountered, along with a comprehensive derivatisation study, in order to probe the chemical space for potential protein backbone H-bonding interactions. Synthesis of a related 3,4-arylindolyl-5-aminopyrazole series was also undertaken, based on identification of potent kinase inhibition within a closely related heterocyclic template. Following synthesis of approximately 50 novel compounds with a diversity of H-bonding enzyme-interacting potential within these classes, biological studies confirmed that significant topo II inhibition was present for 9 lead compounds, in previously unseen pyrazolo[1,5-a]pyrimidine, indolo[2,3-c]carbazole and branched S,N-disubstituted thiouracil derivative series. NCI-60 cancer cell line growth inhibition data for 6 representative compounds also revealed interesting selectivity differences between each compound class, while a new pyrimido[5,4-c]carbazole agent strongly inhibited cancer cell division at 10 µM, with appreciable cytotoxic activity observed across several tumour types.

Relevance: 30.00%

Abstract:

A search result provided by existing digital library and web search systems typically comprises only a prioritised list of possible publications or web pages that meet the search criteria, possibly with excerpts and possibly with search terms highlighted. The research in progress reported in this poster contributes to a larger research effort to provide a readable summary of search results that synthesises relevant publications or web pages into results that meet four C’s: comprehensive, concise, coherent, and correct, as a more useful alternative to un-synthesised result lists. The scope of this research is limited to searching for and synthesising Design Science Research (DSR) publications that present the results of DSR, as an example problem domain.