924 results for Specification
Abstract:
Table of Contents
1 | Introduction | 1 |
1.1 | What is an Adiabatic Shear Band? | 1 |
1.2 | The Importance of Adiabatic Shear Bands | 6 |
1.3 | Where Adiabatic Shear Bands Occur | 10 |
1.4 | Historical Aspects of Shear Bands | 11 |
1.5 | Adiabatic Shear Bands and Fracture Maps | 14 |
1.6 | Scope of the Book | 20 |
2 | Characteristic Aspects of Adiabatic Shear Bands | 24 |
2.1 | General Features | 24 |
2.2 | Deformed Bands | 27 |
2.3 | Transformed Bands | 28 |
2.4 | Variables Relevant to Adiabatic Shear Banding | 35 |
2.5 | Adiabatic Shear Bands in Non-Metals | 44 |
3 | Fracture and Damage Related to Adiabatic Shear Bands | 54 |
3.1 | Adiabatic Shear Band Induced Fracture | 54 |
3.2 | Microscopic Damage in Adiabatic Shear Bands | 57 |
3.3 | Metallurgical Implications | 69 |
3.4 | Effects of Stress State | 73 |
4 | Testing Methods | 76 |
4.1 | General Requirements and Remarks | 76 |
4.2 | Dynamic Torsion Tests | 80 |
4.3 | Dynamic Compression Tests | 91 |
4.4 | Contained Cylinder Tests | 95 |
4.5 | Transient Measurements | 98 |
5 | Constitutive Equations | 104 |
5.1 | Effect of Strain Rate on Stress-Strain Behaviour | 104 |
5.2 | Strain-Rate History Effects | 110 |
5.3 | Effect of Temperature on Stress-Strain Behaviour | 114 |
5.4 | Constitutive Equations for Non-Metals | 124 |
6 | Occurrence of Adiabatic Shear Bands | 125 |
6.1 | Empirical Criteria | 125 |
6.2 | One-Dimensional Equations and Linear Instability Analysis | 134 |
6.3 | Localization Analysis | 140 |
6.4 | Experimental Verification | 146 |
7 | Formation and Evolution of Shear Bands | 155 |
7.1 | Post-Instability Phenomena | 156 |
7.2 | Scaling and Approximations | 162 |
7.3 | Wave Trapping and Viscous Dissipation | 167 |
7.4 | The Intermediate Stage and the Formation of Adiabatic Shear Bands | 171 |
7.5 | Late Stage Behaviour and Post-Mortem Morphology | 179 |
7.6 | Adiabatic Shear Bands in Multi-Dimensional Stress States | 187 |
8 | Numerical Studies of Adiabatic Shear Bands | 194 |
8.1 | Objects, Problems and Techniques Involved in Numerical Simulations | 194 |
8.2 | One-Dimensional Simulation of Adiabatic Shear Banding | 199 |
8.3 | Simulation with Adaptive Finite Element Methods | 213 |
8.4 | Adiabatic Shear Bands in the Plane Strain Stress State | 218 |
9 | Selected Topics in Impact Dynamics | 229 |
9.1 | Planar Impact | 230 |
9.2 | Fragmentation | 237 |
9.3 | Penetration | 244 |
9.4 | Erosion | 255 |
9.5 | Ignition of Explosives | 261 |
9.6 | Explosive Welding | 268 |
10 | Selected Topics in Metalworking | 273 |
10.1 | Classification of Processes | 273 |
10.2 | Upsetting | 276 |
10.3 | Metalcutting | 286 |
10.4 | Blanking | 293 |
Appendices | 297 |
A | Quick Reference | 298 |
B | Specific Heat and Thermal Conductivity | 301 |
C | Thermal Softening and Related Temperature Dependence | 312 |
D | Materials Showing Adiabatic Shear Bands | 335 |
E | Specification of Selected Materials Showing Adiabatic Shear Bands | 341 |
F | Conversion Factors | 357 |
References | 358 |
Author Index | 369 |
Subject Index | 375 |
Abstract:
13 p.
Abstract:
An examination is made of the socio-economic factors associated with the failure of existing approaches to meeting the fishing input requirements of small-scale fisheries in Nigeria. The fishermen and the secretaries of the fishermen's cooperative societies in three major settlements (Uta-Ewa, Okoroete and Iko) were selected for interviews. The survey showed that the fishermen's view of the role of the cooperative society is mistaken, and that specific programmes need to be directed towards correcting this perception. Thus, for any meaningful support programme for artisanal small-scale fishermen, the fishermen's perception of the cooperative organization must first be properly aligned. It is suggested that fishing inputs be determined by type and specification as a preliminary step in the delivery of inputs to the fishermen. Social, economic and cultural variability should be related to the fishermen's requirements. The price level faced by the fishermen will determine the direction and level of government support required.
Abstract:
This project consists of the analysis and search for solutions for production control in the running-gear (rodajes) unit of the company CAF S.A. To this end, it was necessary to analyse production processes, capture requirements, develop a set of interim production-control tools, and draw up a requirements specification, in addition to managing and liaising with suppliers. These lines of work are described in this report, together with an analysis of results, conclusions, and the future lines along which work will continue.
Abstract:
The Drosophila compound eye has provided a genetic approach to understanding the specification of cell fates during differentiation. The eye is made up of some 750 repeated units or ommatidia, arranged in a lattice. The cellular composition of each ommatidium is identical. The arrangement of the lattice and the specification of cell fates in each ommatidium are thought to occur in development through cellular interactions with the local environment. Many mutations have been studied that disrupt the proper patterning and cell fating in the eye. The eyes absent (eya) mutation, the subject of this thesis, was chosen because of its eyeless phenotype. In eya mutants, eye progenitor cells undergo programmed cell death before the onset of patterning has occurred. The molecular genetic analysis of the gene is presented.
The eye arises from the larval eye-antennal imaginal disc. During the third larval instar, a wave of differentiation progresses across the disc, marked by a furrow. Anterior to the furrow, proliferating cells are found in apparent disarray. Posterior to the furrow, clusters of differentiating cells can be discerned, that correspond to the ommatidia of the adult eye. Analysis of an allelic series of eya mutants in comparison to wild type revealed the presence of a selection point: a wave of programmed cell death that normally precedes the furrow. In eya mutants, an excessive number of eye progenitor cells die at this selection point, suggesting the eya gene influences the distribution of cells between fates of death and differentiation.
In addition to its role in the eye, the eya gene has an embryonic function. The eye function is autonomous to the eye progenitor cells. Molecular maps of the eye and embryonic phenotypes are different. Therefore, the function of eya in the eye can be treated independently of the embryonic function. Cloning of the gene reveals two cDNAs that are identical except for the use of an alternatively-spliced 5' exon. The predicted protein products differ only at the N-termini. Sequence analysis shows these two proteins to be the first of their kind to be isolated. Transgenic studies using the two cDNAs show that either gene product is able to rescue the eye phenotype of eya mutants.
The eya gene exhibits interallelic complementation. This interaction is an example of an "allelic position effect": an interaction that depends on the relative position in the genome of the two alleles, which is thought to be mediated by chromosomal pairing. The interaction at eya is essentially identical to a phenomenon known as transvection, which is an allelic position effect that is sensitive to certain kinds of chromosomal rearrangements. A current model for the mechanism of transvection is the trans action of gene regulatory regions. The eya locus is particularly well suited for the study of transvection because the mutant phenotypes can be quantified by scoring the size of the eye.
The molecular genetic analysis of eya provides a system for uncovering mechanisms underlying differentiation, developmentally regulated programmed cell death, and gene regulation.
Abstract:
In order to identify new molecules that might play a role in regional specification of the nervous system, we generated and characterized monoclonal antibodies (mAbs) that have positionally-restricted labeling patterns.
The FORSE-1 mAb was generated using a strategy designed to produce mAbs against neuronal cell surface antigens that might be regulated by regionally-restricted transcription factors in the developing central nervous system (CNS). FORSE-1 staining is enriched in the forebrain compared with the rest of the CNS until E18. Between E11.5 and E13.5, only certain areas of the forebrain are labeled. There is also a dorsoventrally-restricted region of labeling in the hindbrain and spinal cord. The mAb labels a large proteoglycan-like cell-surface antigen (>200 kD). The labeling pattern of FORSE-1 is conserved in various mammals and in chick.
To determine whether the FORSE-1 labeling pattern is similar to that of known transcription factors, the expression of BF-1 and Dlx-2 was compared with FORSE-1. There is a striking overlap between BF-1 and FORSE-1 in the telencephalon. In contrast, FORSE-1 and Dlx-2 have very different patterns of expression in the forebrain, suggesting that regulation by Dlx-2 alone cannot explain the distribution of FORSE-1. They do, however, share some sharp boundaries in the diencephalon. In addition, FORSE-1 identifies some previously unknown boundaries in the developing forebrain. Thus, FORSE-1 is a new cell surface marker that can be used to subdivide the embryonic forebrain into regions smaller than previously described, providing further complexity necessary for developmental patterning.
I also studied the expression of the cell surface protein CD9 in the developing and adult rat nervous system. CD9 is implicated in intercellular signaling and cell adhesion in the hematopoietic system. In the nervous system, CD9 may perform similar functions in early sympathetic ganglia, chromaffin cells, and motor neurons, all of which express the protein. The presence of CD9 on the surfaces of Schwann cells and axons at the appropriate time may allow the protein to participate in the cellular interactions involved in myelination.
Abstract:
In the past, many different methodologies have been devised to support software development, and different sets of methodologies have been developed to support the analysis of software artefacts. We have identified this mismatch as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are "analysis-agnostic": they do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied post mortem, after the software has been developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.
In this thesis we address the above issues by developing a new methodology, called "analysis-aware" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.
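To give a flavour of what a rule-based static-analysis check can look like, the following is a minimal sketch using Python's standard ast module. The rule name and the flagged pattern (calls to a hypothetical blocking primitive wait_for that omit a timeout keyword) are invented for illustration; this is not the framework or the specification language described in the thesis.

```python
import ast

def missing_timeout(node):
    """Rule: flag calls to the hypothetical blocking primitive 'wait_for' with no timeout."""
    return (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "wait_for"
            and not any(kw.arg == "timeout" for kw in node.keywords))

# A "rule" is simply a predicate over AST nodes plus a message.
RULES = [(missing_timeout, "blocking wait_for() without a timeout")]

def check(source: str):
    """Apply every rule to every node of the parsed source and collect findings."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        for rule, message in RULES:
            if rule(node):
                findings.append((getattr(node, "lineno", 0), message))
    return findings

if __name__ == "__main__":
    demo = "wait_for(sensor_ready)\nwait_for(sensor_ready, timeout=5)\n"
    for lineno, msg in check(demo):
        print(f"line {lineno}: {msg}")
```

The point of the sketch is only the shape of the approach: code written in an analysis-friendly style exposes patterns that lightweight rules can match mechanically, before any heavyweight post-mortem analysis is attempted.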
Abstract:
Notch signaling acts in many diverse developmental spatial patterning processes. To better understand why this particular pathway is employed where it is and how downstream feedbacks interact with the signaling system to drive patterning, we have pursued three aims: (i) to quantitatively measure the Notch system's signal input/output (I/O) relationship in cell culture, (ii) to use the quantitative I/O relationship to computationally predict patterning outcomes of downstream feedbacks, and (iii) to reconstitute a Notch-mediated lateral induction feedback (in which Notch signaling upregulates the expression of Delta) in cell culture. The quantitative Notch I/O relationship revealed that in addition to the trans-activation between Notch and Delta on neighboring cells there is also a strong, mutual cis-inactivation between Notch and Delta on the same cell. This feature tends to amplify small differences between cells. Incorporating our improved understanding of the signaling system into simulations of different types of downstream feedbacks and boundary conditions lent us several insights into their function. The Notch system converts a shallow gradient of Delta expression into a sharp band of Notch signaling without any sort of feedback at all, in a system motivated by the Drosophila wing vein. It also improves the robustness of lateral inhibition patterning, where signal downregulates ligand expression, by removing the requirement for explicit cooperativity in the feedback and permitting an exceptionally simple mechanism for the pattern. When coupled to a downstream lateral induction feedback, the Notch system supports the propagation of a signaling front across a tissue to convert a large area from one state to another with only a local source of initial stimulation. It is also capable of converting a slowly-varying gradient in parameters into a sharp delineation between high- and low-ligand populations of cells, a pattern reminiscent of smooth muscle specification around artery walls. Finally, by implementing a version of the lateral induction feedback architecture modified with the addition of an autoregulatory positive feedback loop, we were able to generate cells that produce enough cis ligand when stimulated by trans ligand to themselves transmit signal to neighboring cells, which is the hallmark of lateral induction.
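The cis-inactivation described above is easiest to see in a toy model. The sketch below simulates two coupled cells with trans-activation between Notch and Delta on neighbouring cells and mutual cis-inactivation on the same cell, in the spirit of published Notch-Delta models with cis-inhibition. The equations, parameter values and variable names are illustrative assumptions, not the quantitative I/O relationship measured in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-cell Notch (N), Delta (D), reporter (R) model. Illustrative parameters only.
beta_N, beta_R = 10.0, 100.0   # Notch and reporter production rates
beta_D0 = 20.0                 # maximal Delta production
kt, kc = 2.0, 0.5              # trans- and cis-interaction strengths
n = 4.0                        # Hill coefficient for repression of Delta by the reporter

def rhs(t, y):
    N, D, R = y[0:2], y[2:4], y[4:6]
    other = np.array([1, 0])            # index of the neighbouring cell
    trans = kt * N * D[other]           # trans-activation: N_i binds D_j
    cis = kc * N * D                    # cis-inactivation: N_i and D_i deplete each other
    dN = beta_N - N - trans - cis
    dD = beta_D0 / (1.0 + R**n) - D - kt * D * N[other] - cis
    dR = beta_R * trans / (1.0 + trans) - R
    return np.concatenate([dN, dD, dR])

# Nearly identical initial conditions; with strong enough feedback the two cells
# settle into distinct high/low signalling states (lateral inhibition).
y0 = np.array([1.0, 1.0, 1.01, 1.0, 0.0, 0.0])
sol = solve_ivp(rhs, (0.0, 100.0), y0, method="LSODA")
print("final reporter levels:", sol.y[4:6, -1])
```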
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
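As a concrete, much-simplified illustration of the mixed-integer style of encoding mentioned above, the sketch below encodes a single "eventually reach a goal box" requirement, the reachability building block of such encodings, for a discrete-time double integrator using big-M constraints and the PuLP modelling library. The dynamics, horizon, box bounds and big-M constant are all assumptions chosen for illustration; the thesis's actual encoding handles full LTL formulas.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T, dt, M = 20, 0.25, 100.0                    # horizon, time step, big-M constant
goal_lo, goal_hi = [4.0, -0.5], [5.0, 0.5]    # goal box on (position, velocity)

prob = LpProblem("eventually_reach_goal", LpMinimize)

# State x[t] = (position, velocity), control u[t], binary z[t] = "in goal at time t".
x = [[LpVariable(f"x_{t}_{i}", -10, 10) for i in range(2)] for t in range(T + 1)]
u = [LpVariable(f"u_{t}", -1, 1) for t in range(T)]
v = [LpVariable(f"absu_{t}", 0) for t in range(T)]   # |u[t]| for the objective
z = [LpVariable(f"z_{t}", cat="Binary") for t in range(T + 1)]

# Initial condition and double-integrator dynamics.
prob += x[0][0] == 0.0
prob += x[0][1] == 0.0
for t in range(T):
    prob += x[t + 1][0] == x[t][0] + dt * x[t][1]
    prob += x[t + 1][1] == x[t][1] + dt * u[t]
    prob += v[t] >= u[t]
    prob += v[t] >= -u[t]

# Big-M constraints: if z[t] = 1 the state must lie in the goal box at time t.
for t in range(T + 1):
    for i in range(2):
        prob += x[t][i] >= goal_lo[i] - M * (1 - z[t])
        prob += x[t][i] <= goal_hi[i] + M * (1 - z[t])

# "Eventually": the goal must hold at least once along the horizon.
prob += lpSum(z) >= 1

prob += lpSum(v)   # minimise total control effort (L1 norm)
prob.solve()
print("status:", prob.status, " effort:", sum(var.value() for var in v))
```

Temporal operators beyond "eventually" (always, until, nested formulas) are encoded by combining such binary indicator variables with additional linear constraints in the same spirit.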
Abstract:
As evolution progresses, developmental changes occur. Genes lose and gain molecular partners, regulatory sequences, and new functions. As a consequence, tissues evolve alternative, more or less robust methods of developing similar structures. How this occurs is a major question in biology. One method of addressing this question is to examine the developmental and genetic differences between similar species. Several studies of the nematodes Pristionchus pacificus and Oscheius CEW1 have revealed various differences in vulval development from the well-studied C. elegans (e.g. gonad induction, competence group specification, and gene function).
I approached the question of developmental change in a similar manner by using Caenorhabditis briggsae, a close relative of C. elegans. C. briggsae allows the use of transgenic approaches to determine developmental changes between species. We determined subtle changes in the competence group, in 1° cell specification, and in the vulval lineage.
We also analyzed the let-60 gene in four nematode species. We found conservation in codon identity and exon-intron boundaries, but a lack of an extended 3' untranslated region in Caenorhabditis briggsae.
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as the outer product, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement for full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
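The Winner-Take-All primitive mentioned above is easy to illustrate. The sketch below implements the classic MAXNET-style iteration, in which each unit repeatedly inhibits the others until only the unit with the largest initial activation remains active. This is a textbook WTA variant offered purely as an illustration, not the specific constraint circuits defined in the thesis.

```python
import numpy as np

def winner_take_all(inputs, eps=None, max_iters=1000):
    """MAXNET-style Winner-Take-All: mutual inhibition drives every unit except
    the one with the largest activation to zero. Returns the winner's index."""
    y = np.asarray(inputs, dtype=float).copy()
    n = y.size
    if eps is None:
        eps = 1.0 / (2 * n)      # inhibition strength; must be below 1/(n-1)
    for _ in range(max_iters):
        total = y.sum()
        y = np.maximum(0.0, y - eps * (total - y))   # each unit inhibited by the rest
        if np.count_nonzero(y) <= 1:
            break
    return int(np.argmax(y))

# The unit with the largest input "wins" once the competition settles.
print(winner_take_all([0.31, 0.84, 0.80, 0.12]))   # -> 1
```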
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
Abstract:
25 p.
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in the classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
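For reference, one standard form of this linearizing transformation (as it appears in the path-integral / desirability-function literature) is shown below; the notation is generic and chosen here for illustration rather than copied from the thesis.

```latex
% Dynamics and cost rate:
\[
  dx = f(x)\,dt + G(x)\bigl(u\,dt + d\omega\bigr), \qquad
  \ell(x,u) = q(x) + \tfrac12 u^\top R u .
\]
% Stochastic HJB and its minimizer:
\[
  0 = \min_u \Bigl[\, q + \tfrac12 u^\top R u + (f + Gu)^\top V_x
      + \tfrac12 \operatorname{tr}\bigl(G\Sigma_\varepsilon G^\top V_{xx}\bigr) \Bigr],
  \qquad u^* = -R^{-1} G^\top V_x .
\]
% Under the structural assumption \(\Sigma_\varepsilon = \lambda R^{-1}\)
% (noise enters where control acts, scaled by the control cost), the substitution
% \(V = -\lambda \log \Psi\) cancels the quadratic term, leaving a PDE that is
% linear in the desirability \(\Psi\):
\[
  \frac{q(x)}{\lambda}\,\Psi
    = f^\top \Psi_x
    + \tfrac12 \operatorname{tr}\bigl(G\Sigma_\varepsilon G^\top \Psi_{xx}\bigr).
\]
```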
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
The main constituents of air (nitrogen, oxygen and argon) are increasingly present in industry, where they are used in chemical processes, in the transport of foodstuffs and in waste processing. The two main technologies for separating the components of air are adsorption and cryogenic distillation. For both processes, however, air contaminants such as carbon dioxide, water vapour and hydrocarbons must be removed to avoid operational and safety problems. This work therefore studies the air pre-purification process using adsorption. In this system the air stream flows alternately between two adsorber beds so that purified air is produced continuously. More specifically, the dissertation focuses on investigating the behaviour of PSA (pressure swing adsorption) pre-purification units, in which the desorption step is carried out by reducing the pressure. The analysis of the pre-purification unit starts from the modelling of the adsorption beds by a system of partial differential equations expressing the mass balance in the gas stream and in the bed. In this model, the adsorption equilibrium relationship is described by the Dubinin-Astakhov isotherm extended to multicomponent mixtures. To simulate the model, the spatial derivatives are discretized via finite differences and the resulting system of ordinary differential equations is solved by an appropriate solver (method of lines). To simulate the unit in operation, this model is coupled to a convergence algorithm covering the four steps of the operating cycle: adsorption, depressurization, purge and desorption. This algorithm must guarantee that the final conditions of the last step are equivalent to the initial conditions of the first step (cyclic steady state). The simulation was implemented as a computer code in the Scilab programming environment (Scilab 5.3.0, 2010), which is freely distributed. The simulation algorithms for each individual step and for the complete cycle are finally used to analyse the behaviour of the pre-purification unit, examining how its performance is affected by changes in design or operating variables. For example, the bed loading scheme was investigated, showing that the ideal bed configuration is 50% alumina followed by 50% zeolite. Process variables were also analysed, namely the adsorption pressure, the feed flow rate and the adsorption cycle time, showing that an increase in the feed flow rate leads to loss of the product specification, which can be recovered by reducing the adsorption cycle time. It was also shown that a higher adsorption pressure leads to greater removal of contaminants.
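The modelling approach summarised above (spatial finite differences coupled to an ODE integrator, i.e. the method of lines) can be sketched in a few lines. The sketch below models one adsorption step in a single plug-flow bed with a linear-driving-force uptake; a Langmuir isotherm stands in for the Dubinin-Astakhov isotherm used in the dissertation, and Python replaces Scilab. All parameter values and names are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch of one adsorption step in a single bed:
#   gas phase:   dc/dt = -v*dc/dz - ((1-eps)/eps)*rho_p*dq/dt
#   solid phase: dq/dt = k_ldf*(q_star(c) - q)
nz, L = 50, 1.0                                # grid points, bed length [m]
dz = L / (nz - 1)
v, eps, rho_p, k_ldf = 0.1, 0.4, 700.0, 0.05   # illustrative parameters
qmax, b = 2.0e-3, 500.0                        # Langmuir parameters (stand-in isotherm)
c_feed = 0.015                                 # feed contaminant concentration [mol/m3]

def q_star(c):
    """Equilibrium loading from a simple Langmuir isotherm."""
    return qmax * b * c / (1.0 + b * c)

def rhs(t, y):
    c, q = y[:nz], y[nz:]
    dqdt = k_ldf * (q_star(c) - q)
    dcdz = np.empty_like(c)
    dcdz[0] = (c[0] - c_feed) / dz             # upwind difference at the inlet
    dcdz[1:] = (c[1:] - c[:-1]) / dz           # first-order upwind elsewhere
    dcdt = -v * dcdz - ((1.0 - eps) / eps) * rho_p * dqdt
    return np.concatenate([dcdt, dqdt])

y0 = np.zeros(2 * nz)                          # clean bed, no gas-phase contaminant
sol = solve_ivp(rhs, (0.0, 2000.0), y0, method="BDF")
print("outlet concentration at end of step:", sol.y[nz - 1, -1])
```

In the full cyclic simulation, one such integration is performed per step (adsorption, depressurization, purge, desorption) and the steps are iterated until the profiles repeat from cycle to cycle, which is the cyclic steady state mentioned above.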
Abstract:
Contamination of the air and the environment by petroleum-derived fuels has been the subject of growing research in Brazil. Among the types of environmental pollution, atmospheric pollution is the one that causes the greatest nuisance to the population. It affects human health, causing anything from simple irritation to lung cancer. Among the most dangerous pollutants found in these environments, the hydrocarbons and the monoaromatic compounds such as benzene, toluene and xylenes (BTX), which are present in these fuels, stand out as extremely toxic to human health. To control these volatile organic compounds, it is necessary to quantify them and compare them with the tolerance limit values established in Brazilian legislation. Gas chromatography and infrared spectroscopy allow this task to be carried out relatively simply and quickly. The aim of this work was therefore to present the chemical composition of samples of type C gasoline sold at retail stations in the metropolitan region of the State of Rio de Janeiro. Quantitative analyses of the main chemical groups (paraffins, olefins, naphthenes and aromatics) and of ethanol were carried out by high-resolution gas chromatography, and the benzene, toluene and xylene (BTX) composition was determined by infrared absorption. The results were compared with the limits given in the ANP specification (Ordinance No. 309) for gasoline quality, in order to verify conformity with this regulatory agency. The results showed that all the olefin and aromatic contents found were below the specified limit. Some stations showed benzene contents above the specified limit, indicating a need for enforcement action by the ANP, mainly because of the toxic action of benzene.