971 results for Harder
Abstract:
The isotopic composition of hydrogen and helium in solar cosmic rays provides a means of studying solar flare particle acceleration mechanisms since the enhanced relative abundance of rare isotopes, such as 2H, 3H and 3He, is due to their production by inelastic nuclear collisions in the solar atmosphere during the flare. In this work the Caltech Electron/Isotope Spectrometer on the IMP-7 spacecraft has been used to measure this isotopic composition. The response of the dE/dx-E particle telescope is discussed and alpha particle channeling in thin detectors is identified as an important background source affecting measurement of low values of (3He/4He).
The following flare-averaged results are obtained for the period October 1972 - November 1973: (2H/1H) = 7 (+10/−6) × 10⁻⁶ (1.6-8.6 MeV/nuc), (3H/1H) < 3.4 × 10⁻⁶ (1.2-6.8 MeV/nuc), (3He/4He) = (9 ± 4) × 10⁻³, and (3He/1H) = (1.7 ± 0.7) × 10⁻⁴ (3.1-15.0 MeV/nuc). The deuterium and tritium ratios are significantly lower than the same ratios at higher energies, suggesting that the deuterium and tritium spectra are harder than that of the protons. They are, however, consistent with the same thin-target-model relativistic path length of ~1 g/cm² (or, equivalently, ~0.3 g/cm² at 30 MeV/nuc) which is implied by the higher-energy results. The 3He results, consistent with previous observations, would imply a path length at least 3 times as long, but the observations may be contaminated by small 3He-rich solar events.
During 1973, three "3He-rich events," containing much more 3He than 2H or 3H, were observed on 14 February, 29 June and 5 September. Although the total production cross sections for 2H, 3H and 3He are comparable, an upper limit to (2H/3He) and (3H/3He), summing over the three events, was 0.053 (2.9-6.8 MeV/nuc). This upper limit is marginally consistent with Ramaty and Kozlovsky's thick-target model, which accounts for such events by the nuclear reaction kinematics and directional properties of the flare acceleration process. The 5 September event was particularly significant in that much more 3He was observed than 4He, and the fluxes of 3He and 1H were about equal. The range of (3He/4He) for such events reported to date is 0.2 to ~6, while (3He/1H) extends from 10⁻³ to ~1. The role of backscattered and mirroring protons and alphas in accounting for such variations is discussed.
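The flare-averaged ratios quoted above are, in essence, counts of a rare isotope normalized to counts of an abundant one, with uncertainties dominated by counting statistics. A minimal sketch of that step is given below (Python; the event counts and energy window are invented for illustration and are not the thesis data).

```python
import math

def isotope_ratio(n_rare, n_abundant):
    """Abundance ratio of a rare to an abundant isotope from raw event
    counts, with the fractional uncertainty from Poisson statistics:
    sigma_R / R = sqrt(1/N_rare + 1/N_abundant)."""
    if n_rare == 0 or n_abundant == 0:
        raise ValueError("need at least one count in each channel")
    ratio = n_rare / n_abundant
    sigma = ratio * math.sqrt(1.0 / n_rare + 1.0 / n_abundant)
    return ratio, sigma

# Hypothetical event counts in a 3.1-15.0 MeV/nuc window (illustration only).
r, dr = isotope_ratio(n_rare=25, n_abundant=2800)
print(f"3He/4He = {r:.2e} +/- {dr:.2e}")
```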
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
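The local/global separation described in the final contribution rests on the observation that the local transfer function is low-order but full-rank, while the global one is high-order but low-rank, so the global contribution can be extracted by penalizing its nuclear norm. A minimal convex sketch of this idea follows (Python with numpy and cvxpy; the matrix sizes, synthetic data and penalty weight are assumptions for illustration, not the thesis's actual formulation).

```python
import numpy as np
import cvxpy as cp

# Illustrative data: a low-rank "global" response buried under a
# full-rank but small "local" component (synthetic, not from the thesis).
rng = np.random.default_rng(0)
n = 40
global_part = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank 2
local_part = 0.3 * rng.standard_normal((n, n))                           # full rank
D = global_part + local_part

# Convex decomposition: the nuclear norm promotes low rank in G,
# while a Frobenius penalty keeps the remainder R small.
G = cp.Variable((n, n))
R = cp.Variable((n, n))
lam = 0.1
prob = cp.Problem(cp.Minimize(cp.normNuc(G) + lam * cp.sum_squares(R)),
                  [G + R == D])
prob.solve()

print("estimated rank of recovered global part:",
      np.linalg.matrix_rank(G.value, tol=1e-3))
```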
Abstract:
The scope of this work is to investigate the nature and functions of burdens of argumentation in their relations with the legal system and with legal argumentation. The background for these analyses is the threefold conditioning of law. According to this view, law and legal argumentation are conditioned extrinsically, intrinsically and institutionally. Against this backdrop, it is argued, on the one hand, that argumentative burdens are necessary components of a legal system comprising rules and principles. Analysed structurally, argumentative burdens are understood, on the other hand, as effects of rules and standards that consolidate normative priority relations. Building on these relations, it is argued that burdens of argumentation are mechanisms for reducing and controlling the uncertainty that necessarily characterizes the sub-ideality of the legal system, in that they (i) facilitate the maintenance of the priority relations that support them in the resolution of concrete cases, (ii) make the inversion of those relations more difficult, and (iii) establish stopping points in legal argumentation in situations in which the development of argumentative chains cannot guarantee whether, in a given concrete case, a certain priority relation should be maintained or inverted.
Abstract:
Almost all material selection problems require that a compromise be sought between some metric of performance and cost. Trade-off methods using utility functions allow optimal solutions to be found for two objectives, but doing so for three is harder. This paper develops and demonstrates a method for dealing with three objectives.
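One common way to handle such a compromise is to fold the objectives into a single scalar utility, using exchange constants that convert each performance metric into an equivalent cost, and then rank candidates by that utility. The sketch below (Python; the materials, property values and exchange constants are invented for illustration and are not taken from the paper) shows a three-objective version of this idea; the paper itself should be consulted for the actual trade-off method it develops.

```python
# Minimal weighted-utility trade-off over three objectives
# (mass, cost and thermal distortion, say), all to be minimized.
# Exchange constants express how much extra cost is acceptable
# per unit reduction in each performance metric (values invented).

candidates = {
    # name: (mass_kg, cost_usd, distortion_um)
    "Al alloy": (1.2, 15.0, 30.0),
    "CFRP":     (0.6, 60.0, 10.0),
    "Steel":    (2.8,  8.0, 45.0),
}

alpha_mass = 20.0        # $ per kg saved (assumed exchange constant)
alpha_distortion = 1.5   # $ per micrometre of distortion avoided (assumed)

def penalty(props):
    mass, cost, distortion = props
    # Convert everything to an equivalent cost and minimize it.
    return cost + alpha_mass * mass + alpha_distortion * distortion

ranked = sorted(candidates.items(), key=lambda kv: penalty(kv[1]))
for name, props in ranked:
    print(f"{name:10s} equivalent cost = {penalty(props):6.1f}")
```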
Abstract:
The evolution of digital communication systems is intrinsically related to the development of optical fibre technology. Since its creation in the 1960s, extensive research has been carried out to increase the transmitted information capacity by reducing attenuation, controlling chromatic dispersion and eliminating nonlinearities. In this context, Bragg fibres emerge as a structure with great potential for minimizing these drawbacks. Bragg fibres support confined modes through a mechanism different from that of traditional fibres: the core has a low refractive index, and the cladding consists of alternating dielectric rings with different refractive indices. For a hollow-core Bragg fibre, such as the one considered in this work, there are losses due to leaky modes. The dispersion analysis of these structures therefore takes place in the complex plane, which makes it very difficult. This dissertation is based on a strategy essential to the analysis of the transverse TE0m and TM0m modes and of the hybrid modes. The results obtained are validated by comparison with those reported in the literature. The work discusses the losses and dispersion of the modes mentioned, and the results obtained may guide further research on Bragg fibres.
Abstract:
A computer can assist the process of design by analogy by recording past designs. The experience these represent could be much wider than that of designers using the system, who therefore need to identify potential cases of interest. If the computer assists with this lookup, the designers can concentrate on the more interesting aspect of extracting and using the ideas which are found. However, as the knowledge base grows it becomes ever harder to find relevant cases using a keyword indexing scheme without knowing precisely what to look for. Therefore a more flexible searching system is needed.
If a similarity measure can be defined for the features of the designs, then it is possible to match and cluster them. Using a simple measure like co-occurrence of features within a particular case would allow this to happen without human intervention, which is tedious and time-consuming. Any knowledge that is acquired about how features are related to each other will be very shallow: it is not intended as a cognitive model for how humans understand, learn, or retrieve information, but more an attempt to make effective, efficient use of the information available. The question remains of whether such shallow knowledge is sufficient for the task.
A system to retrieve information from a large database is described. It uses co-occurrences to relate keywords to each other, and then extends search queries with similar words. This seems to make relevant material more accessible, providing hope that this retrieval technique can be applied to a broader knowledge base.
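The retrieval scheme described here can be prototyped directly: count how often keywords co-occur within the same case, score other keywords by those counts, and expand each query with the words most strongly associated with its terms. The following sketch (Python; the toy documents and the simple count-based similarity are my assumptions, not the system's actual implementation) illustrates the idea.

```python
from collections import Counter
from itertools import combinations

# Each "case" is represented by its set of keywords (toy data).
documents = [
    {"gear", "bearing", "lubrication", "shaft"},
    {"bearing", "shaft", "vibration"},
    {"gear", "lubrication", "seal"},
]

# Co-occurrence counts: how often two keywords appear in the same case.
cooc = Counter()
for doc in documents:
    for a, b in combinations(sorted(doc), 2):
        cooc[(a, b)] += 1

def similar(word, top_n=2):
    """Keywords that most often co-occur with `word`."""
    scores = Counter()
    for (a, b), n in cooc.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return [w for w, _ in scores.most_common(top_n)]

def expand_query(terms):
    expanded = set(terms)
    for t in terms:
        expanded.update(similar(t))
    return expanded

print(expand_query({"bearing"}))   # e.g. {'bearing', 'shaft', 'gear'}
```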
Abstract:
Information theoretic active learning has been widely studied for probabilistic models. For simple regression, an optimal myopic policy is easily tractable. However, for other tasks and with more complex models, such as classification with nonparametric models, the optimal solution is harder to compute. Current approaches make approximations to achieve tractability. We propose an approach that expresses information gain in terms of predictive entropies, and apply this method to the Gaussian Process Classifier (GPC). Our approach makes minimal approximations to the full information theoretic objective. Our experimental performance compares favourably to many popular active learning algorithms, and has equal or lower computational complexity. We also compare well to decision-theoretic approaches, which are privy to more information and require much more computational time. Secondly, by further developing a reformulation of binary preference learning as a classification problem, we extend our algorithm to Gaussian Process preference learning.
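For a binary GP classifier, the information gain of a candidate point can be written as the entropy of the averaged predictive probability minus the expected entropy of the prediction given the latent function, and both terms can be estimated from the latent predictive mean and variance. A small sketch follows (Python with numpy/scipy; the probit link and the Monte Carlo estimate over the latent Gaussian are illustrative assumptions, not necessarily the approximation used in the paper).

```python
import numpy as np
from scipy.stats import norm

def binary_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(mu, var, n_samples=2000, rng=None):
    """H[E_f p(y|f)] - E_f H[p(y|f)] for a probit GP classifier,
    estimated by Monte Carlo over the latent Gaussian N(mu, var)."""
    rng = rng or np.random.default_rng(0)
    f = rng.normal(mu, np.sqrt(var), size=n_samples)
    p = norm.cdf(f)                       # per-sample class probability
    marginal = binary_entropy(p.mean())   # entropy of the averaged prediction
    conditional = binary_entropy(p).mean()
    return marginal - conditional

# Points with uncertain predictions AND high latent variance score highest.
for mu, var in [(0.0, 0.1), (0.0, 4.0), (3.0, 4.0)]:
    print(f"mu={mu:4.1f} var={var:4.1f} gain={information_gain(mu, var):.3f}")
```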
Abstract:
Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation of both the experimentally imposed distribution of time intervals (the prior) and of the error (the loss function). The responses of a Bayesian ideal observer depend crucially on these internal representations, which have previously been studied only for simple distributions. To study the nature of these representations, we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal, while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and the feedback, in good agreement with a performance-optimizing Bayesian observer and actor model. Bayesian model comparison highlighted that subjects were integrating the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality) seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed) distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.
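A Bayesian observer/actor of the kind invoked here combines a prior over intervals with a noisy measurement and selects the response that minimizes the expected loss implied by the feedback mapping. The sketch below (Python with numpy; the discretized prior, scalar Gaussian measurement noise and squared-error loss are simplifying assumptions, not the paper's full model) shows the basic computation and the resulting central-tendency bias.

```python
import numpy as np

# Discretized support of possible intervals (seconds).
t = np.linspace(0.4, 1.2, 401)

# Example prior over intervals (a skewed mixture; illustrative only).
prior = 0.7 * np.exp(-0.5 * ((t - 0.6) / 0.05) ** 2) \
      + 0.3 * np.exp(-0.5 * ((t - 0.9) / 0.10) ** 2)
prior /= prior.sum()

def bayes_response(measurement, noise_sd=0.08):
    """Posterior over the true interval given a noisy measurement, and the
    response minimizing expected squared error (the posterior mean)."""
    likelihood = np.exp(-0.5 * ((measurement - t) / noise_sd) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return np.sum(t * posterior)

# Responses are pulled towards the prior mass (central-tendency bias).
for m in (0.5, 0.75, 1.0):
    print(f"measured {m:.2f} s -> reproduced {bayes_response(m):.3f} s")
```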
Abstract:
This scoping study proposes using mixed nitride fuel in Pu-based high-conversion LWR designs in order to increase the breeding ratio. The higher-density fuel reduces the hydrogen-to-heavy-metal ratio in the reactor, which results in a harder spectrum in which breeding is more effective. A Resource-renewable Boiling Water Reactor (RBWR) assembly was modeled in MCNP to demonstrate this effect in a typical high-conversion LWR design. It was determined that changing the fuel from (U,TRU)O2 to (U,TRU)N in the assembly can increase its fissile inventory ratio (fissile Pu mass divided by initial fissile Pu mass) from 1.04 to as much as 1.17. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
Multiple recycle of long-lived actinides has the potential to greatly reduce the required storage time for spent nuclear fuel or high-level nuclear waste. This is generally thought to require fast reactors, as most transuranic (TRU) isotopes have low fission probabilities in thermal reactors. Reduced-moderation LWRs are a potential alternative to fast reactors, with reduced time to deployment as they are based on commercially mature LWR technology. Thorium (Th) fuel is neutronically advantageous for TRU multiple recycle in LWRs due to a large improvement in the void coefficient. If Th fuel is used in reduced-moderation LWRs, it appears neutronically feasible to achieve full actinide recycle while burning an external supply of TRU, with related potential improvements in waste management and fuel utilization. In this paper, the fuel cycle of TRU-bearing Th fuel is analysed for reduced-moderation PWRs and BWRs (RMPWRs and RBWRs). RMPWRs have the advantage of relatively rapid implementation and intrinsically low conversion ratios, which is desirable to maximize the TRU burning rate. However, it is challenging to simultaneously satisfy operational and fuel cycle constraints. An RBWR may take longer to implement than an RMPWR due to more extensive changes from current BWR technology. However, the harder neutron spectrum can lead to favourable fuel cycle performance. A two-stage TRU burning cycle, where the first stage is Th-Pu MOX in a conventional PWR feeding a second-stage continuous burn in an RMPWR or RBWR, is technically reasonable, although it is more suitable for the RBWR implementation. In this case, the fuel cycle performance is relatively insensitive to the discharge burn-up of the first stage. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
By employing first-principles total-energy calculations, a systematic study of the dopability of ZnS to both n- and p-type, compared with that of ZnO, is carried out. We find that all the attempted acceptor dopants, group V substituting on the S lattice site and groups I and IB on the Zn site in ZnS, have lower ionization energies than the corresponding ones in ZnO. This can be accounted for by the fact that ZnS has a relatively higher valence-band maximum than ZnO. Native ZnS is weakly p-type under S-rich conditions, as the abundant acceptor V_Zn has a rather large ionization energy. Self-compensation by the formation of interstitial donors in group I- and IB-doped p-type ZnS can be avoided when the sample is prepared under S-rich conditions. In terms of ionization energies, Li_Zn and N_S are the preferred acceptors in ZnS. Native n-type doping of ZnS is limited by the spontaneous formation of intrinsic V_Zn(2-); highly efficient n-type doping with dopants is harder to achieve than in ZnO because of the readiness of forming native compensating centers and the higher ionization energy of donors in ZnS. © 2009 American Institute of Physics. [DOI 10.1063/1.3103585]
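The ionization energies and compensation arguments above are normally derived from the supercell formation energy of a charged defect and the associated thermodynamic transition levels. A generic form of those expressions (standard in first-principles defect studies, written here for reference with finite-size corrections omitted; it is not quoted from the paper) is:

```latex
E^{f}[X^{q}] = E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{host}]
             - \sum_{i} n_{i}\,\mu_{i} + q\left(E_{F} + \varepsilon_{\mathrm{VBM}}\right),
\qquad
\varepsilon(q/q') = \frac{\left.E^{f}[X^{q}]\right|_{E_{F}=0} - \left.E^{f}[X^{q'}]\right|_{E_{F}=0}}{q' - q},
```

where the chemical potentials μ_i encode the S-rich or Zn-rich growth condition and ε(q/q') is the acceptor or donor ionization level referenced to the valence-band maximum.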
Abstract:
Silicon-on-insulator (SOI) technologies have been developed for radiation-hardened military and space applications. The use of SOI has been motivated by the full dielectric isolation of individual transistors, which prevents latch-up. The sensitive region for charge collection in SOI technologies is much smaller than for bulk-silicon devices, potentially making SOI devices much harder against single-event upset (SEU). In this study, 64 kB SOI SRAMs were exposed to different heavy ions, such as Cu, Br, I and Kr. Experimental results show that the heavy-ion SEU threshold linear energy transfer (LET) in the 64 kB SOI SRAMs is about 71.8 MeV·cm²/mg. According to the experimental results, the single-event upset rates (SEUR) in space orbits were calculated; they are on the order of 10⁻¹³ upsets/(day·bit).
Abstract:
We investigate the cohesive energy, heat of formation, elastic constants and electronic band structure of transition-metal diborides TMB2 (TM = Hf, Ta, W, Re, Os, Ir and Pt) in the Pmmn space group using the ab initio pseudopotential total-energy method. Our calculations indicate that there is a relationship between the elastic constants and the valence electron concentration (VEC): the bulk modulus and shear modulus reach their maximum when the VEC is in the range 6.8-7.2. In addition, trends in the elastic constants are well explained in terms of electronic band structure analysis, e.g., the occupation of valence electrons in states near the Fermi level, which determines the cohesive energy and elastic properties. The maximum in bulk modulus and shear modulus is attributed to the nearly complete filling of the TM d-B p bonding states without filling the antibonding states. On the basis of the observed relationship, we predict that orthorhombic OsB2 alloyed with W and Re might be harder than when alloyed with Ir; indeed, further calculations confirmed this expectation.
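The bulk and shear moduli discussed above are typically obtained from the computed single-crystal elastic constants C_ij by Voigt-Reuss-Hill averaging. The sketch below (Python with numpy; the orthorhombic C_ij values are invented for illustration and are not the paper's results) shows that averaging step.

```python
import numpy as np

def voigt_reuss_hill(C):
    """Polycrystalline bulk and shear moduli from a 6x6 elastic-constant
    matrix C (Voigt notation, GPa), via the Voigt-Reuss-Hill average."""
    S = np.linalg.inv(C)  # compliance matrix
    B_V = (C[0,0] + C[1,1] + C[2,2] + 2*(C[0,1] + C[0,2] + C[1,2])) / 9.0
    G_V = (C[0,0] + C[1,1] + C[2,2] - (C[0,1] + C[0,2] + C[1,2])
           + 3*(C[3,3] + C[4,4] + C[5,5])) / 15.0
    B_R = 1.0 / (S[0,0] + S[1,1] + S[2,2] + 2*(S[0,1] + S[0,2] + S[1,2]))
    G_R = 15.0 / (4*(S[0,0] + S[1,1] + S[2,2]) - 4*(S[0,1] + S[0,2] + S[1,2])
                  + 3*(S[3,3] + S[4,4] + S[5,5]))
    return (B_V + B_R) / 2.0, (G_V + G_R) / 2.0

# Hypothetical orthorhombic Cij set (GPa), for illustration only.
C = np.array([
    [570, 180, 170,   0,   0,   0],
    [180, 540, 160,   0,   0,   0],
    [170, 160, 600,   0,   0,   0],
    [  0,   0,   0, 210,   0,   0],
    [  0,   0,   0,   0, 200,   0],
    [  0,   0,   0,   0,   0, 190],
], dtype=float)

B, G = voigt_reuss_hill(C)
print(f"B_H = {B:.0f} GPa, G_H = {G:.0f} GPa")
```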
Abstract:
Based on the structural characteristics of an excavator's working device, a three-dimensional finite element model of the system was established. The analysis yielded the system's natural frequencies and mode shapes, as well as the deformation and stress distribution of the structure at each instant of its dynamic response. The influence of digging resistance on the dynamic response of the system was compared for two cases: easier and harder excavation conditions.