937 results for Voluntary standard systems
Abstract:
Today's feature-rich multimedia products require embedded system solutions built around complex Systems-on-Chip (SoCs) to meet market expectations of high performance at low cost and low energy consumption. The memory architecture of the embedded system strongly influences these parameters, so the embedded system designer must perform a complete memory architecture exploration. This is a multi-objective optimization problem and can be tackled as a two-level optimization problem: the outer level explores various memory architectures, while the inner level explores the placement of data sections (the data layout problem) to minimize memory stalls. Further, the designer is interested in multiple optimal design points to address various market segments, yet tight time-to-market constraints enforce a short design cycle. In this paper we address the multi-level multi-objective memory architecture exploration problem through a combination of a multi-objective genetic algorithm (for memory architecture exploration) and an efficient heuristic data placement algorithm. At the outer level, memory architecture exploration is performed by picking memory modules directly from an ASIC memory library. This allows the exploration to proceed in an integrated framework, where memory allocation, memory exploration, and data layout work in a tightly coupled way to yield optimal design points with respect to area, power, and performance. We evaluated our approach on three embedded applications; for each application it explores several thousand memory architectures, yielding a few hundred optimal design points within a few hours of computation time on a standard desktop.
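As a rough illustration of the inner-level data layout step, the following sketch greedily places the most frequently accessed data sections into the fastest memory banks. All names and the cost model here are hypothetical; this is not the paper's actual heuristic, only a minimal sketch of the kind of placement problem it solves.

```python
# Hypothetical sketch of a greedy data-layout heuristic: place the most
# frequently accessed data sections into the fastest memory banks first.
# Section/bank names and the stall-cycle cost model are illustrative only.

def greedy_data_layout(sections, banks):
    """sections: list of (name, size, accesses);
    banks: list of (name, capacity, stall_cycles_per_access)."""
    banks = sorted(banks, key=lambda b: b[2])          # fastest bank first
    free = {name: cap for name, cap, _ in banks}
    placement, total_stalls = {}, 0
    # Hot sections first: they benefit most from the fast banks.
    for name, size, accesses in sorted(sections, key=lambda s: -s[2]):
        for bank, cap, stall in banks:
            if free[bank] >= size:
                free[bank] -= size
                placement[name] = bank
                total_stalls += accesses * stall
                break
        else:
            raise ValueError(f"section {name} does not fit in any bank")
    return placement, total_stalls

sections = [("buf_a", 4, 1000), ("buf_b", 8, 100), ("tbl", 2, 10)]
banks = [("SRAM", 8, 1), ("DRAM", 64, 10)]
layout, stalls = greedy_data_layout(sections, banks)
```

In this toy instance the hot section `buf_a` and the small table land in SRAM, while `buf_b` overflows to DRAM; the outer GA would then vary the bank sizes themselves.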
Abstract:
The standard Gibbs energies of formation of platinum-rich intermetallic compounds in the systems Pt-Mg, Pt-Ca, and Pt-Ba have been measured in the temperature range of 950 to 1200 K using solid-state galvanic cells based on MgF2, CaF2, and BaF2 as solid electrolytes. The results are summarized by the following equations:
ΔG° (MgPt7) = −256,100 + 16.5T (±2000) J/mol
ΔG° (MgPt3) = −217,400 + 10.7T (±2000) J/mol
ΔG° (CaPt5) = −297,500 + 13.0T (±5000) J/mol
ΔG° (Ca2Pt7) = −551,800 + 22.3T (±5000) J/mol
ΔG° (CaPt2) = −245,400 + 9.3T (±5000) J/mol
ΔG° (BaPt5) = −238,700 + 8.1T (±4000) J/mol
ΔG° (BaPt2) = −197,300 + 4.0T (±4000) J/mol
Solid platinum and liquid alkaline earth metals are selected as the standard states. The relatively large error estimates reflect the uncertainties in the auxiliary thermodynamic data used in the calculation. Because of the strong interaction between platinum and alkaline earth metals, it is possible to reduce oxides of Group IIA metals by hydrogen at high temperature in the presence of platinum. The alkaline earth metals can be recovered from the resulting intermetallic compounds by distillation, regenerating platinum for recycling. The platinum-slag-gas equilibration technique for the study of the activities of FeO, MnO, or Cr2O3 in slags containing MgO, CaO, or BaO is feasible provided the oxygen partial pressure in the gas is maintained above that corresponding to the coexistence of Fe and "FeO."
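As a quick worked example, the fitted expressions of the form ΔG° = A + B·T can be evaluated directly; e.g. for MgPt7 at 1000 K, ΔG° = −256,100 + 16.5 × 1000 = −239,600 J/mol. A minimal sketch using the coefficients reported above:

```python
# Evaluate the fitted Gibbs energy expressions ΔG° = A + B*T (J/mol),
# using the coefficients reported above; valid roughly for 950-1200 K.
COEFFS = {
    "MgPt7": (-256100, 16.5),
    "MgPt3": (-217400, 10.7),
    "CaPt5": (-297500, 13.0),
    "Ca2Pt7": (-551800, 22.3),
    "CaPt2": (-245400, 9.3),
    "BaPt5": (-238700, 8.1),
    "BaPt2": (-197300, 4.0),
}

def gibbs_formation(compound: str, T: float) -> float:
    """Standard Gibbs energy of formation in J/mol at temperature T (K)."""
    A, B = COEFFS[compound]
    return A + B * T

print(gibbs_formation("MgPt7", 1000))  # -239600.0
```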
Abstract:
This paper deals with the characterisation of tar from two configurations of bioresidue thermochemical conversion reactors designed for producer-gas-based power generation systems. The pulverised fuel reactor is a cyclone system (R1), and the solid bioresidue reactor (denoted R2) is an open-top twin air entry system, both at 75-90 kg/h capacity (to generate about 100 kVA of electricity). The reactor R2 has undergone rigorous testing for tar quantity at various conditions in a major Indo-Swiss programme; the former is a recent technology development. Tars collected from these systems by a standard tar collection apparatus at the laboratory at the Indian Institute of Science have been analysed at the Royal Institute of Technology (KTH), Sweden. The results of these analyses show that these thermochemical conversion reactors behave differently from the earlier reactors reported in the literature as far as tar generation is concerned. The extent of tar in hot gas is about 700-800 ppm for R1 and 70-100 ppm for R2. The amounts of the major compounds, naphthalene and phenol, are much lower than what is generally understood to occur in gasifiers in Europe. It is suggested that the longer residence times at high temperatures allowed for in these reactors are responsible for this behaviour. It is concluded that the new-generation reactor concepts extensively tried out at lower power levels hold promise for high-power atmospheric gasification systems for woody as well as pulverisable bioresidues.
Abstract:
The effect of structure height on the lightning striking distance is estimated using a lightning strike model that takes into account the effect of connecting leaders. According to the results, the lightning striking distance may differ significantly from the values assumed in the IEC standard for structure heights beyond 30 m, whereas for structure heights smaller than about 30 m the values assumed by the IEC do not differ significantly from the predictions of the connecting-leader attachment model. Moreover, since the IEC assumes a smaller striking distance than the ones predicted by the adopted model, one can conclude that safety is not compromised in adhering to the IEC standard. Results obtained from the model are also compared with the Collection Volume Method (CVM) and other commonly used lightning attachment models available in the literature. The results show that in the case of the CVM the calculated attractive distances are much larger than the ones obtained using the physically based lightning attachment models. This indicates the possibility of compromised lightning protection procedures when using the CVM. (C) 2014 Elsevier B.V. All rights reserved.
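For context, a commonly used electrogeometric relation (an assumption for illustration here, not the connecting-leader model adopted in the paper) estimates the striking distance r in metres from the prospective return-stroke peak current I in kA as r = 10·I^0.65:

```python
# Common electrogeometric-model estimate of striking distance
# (r = 10 * I**0.65, r in metres, I in kA). Illustrative context only;
# not the physically based connecting-leader model used in the paper.

def striking_distance(I_kA: float) -> float:
    return 10.0 * I_kA ** 0.65

for I in (5, 10, 31, 100):
    print(f"I = {I:>3} kA -> r = {striking_distance(I):.1f} m")
```

For a median first-stroke current of about 31 kA this relation gives a striking distance on the order of 90 m, which is why tall structures are so sensitive to the model chosen.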
Abstract:
Counter systems are a well-known and powerful modeling notation for specifying infinite-state systems. In this paper we target the problem of checking liveness properties in counter systems. We propose two semi-decision techniques towards this, both of which return a formula that encodes the set of reachable states of the system that satisfy a given liveness property. A novel aspect of our techniques is that they use reachability analysis techniques, which are well studied in the literature, as black boxes, and are hence able to compute precise answers on a much wider class of systems than previous approaches to the same problem. Secondly, they compute their results by iterative expansion or contraction, and hence permit an approximate solution to be obtained at any point. We state the formal properties of our techniques, and also provide experimental results using standard benchmarks to show the usefulness of our approaches. Finally, we sketch an extension of our liveness checking approach to check general CTL properties.
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by bootstrap, which is also used to draw inferences about the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real as well as a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that in this model, application of the bootstrap results in significant improvement over the simple maximum likelihood estimates.
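The bootstrap-refinement idea can be sketched on a toy example: a parametric bootstrap bias correction of a maximum likelihood estimate. Here the MLE of a normal standard deviation (which is biased downward) stands in for the model parameters; this is only an illustration of the general technique, not the article's series-system model.

```python
# Parametric bootstrap bias correction of an MLE (toy illustration):
# theta_bc = 2*theta_hat - mean(theta_star), where theta_star are MLEs
# recomputed on samples drawn from the fitted model.
import random

def sigma_mle(xs):
    """MLE of the normal standard deviation (divides by n, hence biased)."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def bootstrap_bias_corrected_sigma(xs, B=2000):
    rng = random.Random(0)                       # fixed seed for reproducibility
    mu_hat, s_hat = sum(xs) / len(xs), sigma_mle(xs)
    boot = []
    for _ in range(B):
        resample = [rng.gauss(mu_hat, s_hat) for _ in xs]
        boot.append(sigma_mle(resample))
    return 2 * s_hat - sum(boot) / B

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(20)]
print(sigma_mle(data), bootstrap_bias_corrected_sigma(data))
```

Because the MLE underestimates the standard deviation, the bias-corrected value comes out larger than the raw MLE, which is the direction of improvement the correction is designed to produce.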
Abstract:
The use of self-contained, low-maintenance sensor systems installed on commercial vessels is becoming an important monitoring and scientific tool in many regions around the world. These systems integrate data from meteorological and water quality sensors with GPS data into a data stream that is automatically transferred from ship to shore. To begin linking some of this developing expertise, the Alliance for Coastal Technologies (ACT) and the European Coastal and Ocean Observing Technology (ECOOT) organized a workshop on this topic in Southampton, United Kingdom, October 10-12, 2006. The participants included technology users, technology developers, and shipping representatives. They collaborated to identify sensors currently employed on integrated systems, users of these data, limitations associated with these systems, and ways to overcome those limitations. The group also identified additional technologies that could be employed on future systems and examined whether standard architectures and data protocols for integrated systems should be established. Participants at the workshop defined 17 different parameters currently being measured by integrated systems. They identified that diverse user groups, ranging from resource management agencies such as the Environmental Protection Agency (EPA) to local tourism groups and educational organizations, utilize information from these systems. Among the limitations identified were instrument compatibility and interoperability, data quality control and quality assurance, and sensor calibration and/or maintenance frequency. Standardization of these integrated systems was viewed as both advantageous and disadvantageous: while participants believed that standardization could be beneficial on many levels, they also felt that users may be hesitant to purchase a suite of instruments from a single manufacturer, and that a "plug and play" system including sensors from multiple manufacturers may be difficult to achieve.
A priority recommendation and conclusion for the general integrated sensor system community was to provide vessel operators with real-time access to relevant data (e.g., ambient temperature and salinity to increase the efficiency of water treatment systems, and meteorological data for increased vessel safety and operating efficiency) for broader system value. Simplified data displays are also required for education and public outreach/awareness. Other key recommendations were to encourage the use of integrated sensor packages within observing systems such as IOOS and EuroGOOS, identify additional customers of sensor system data, and publish results of previous work in peer-reviewed journals to increase agency and scientific awareness of, and confidence in, the technology. Priority recommendations and conclusions for ACT entailed highlighting the value of integrated sensor systems for vessels of opportunity through articles in both the popular press and marine science publications.
Abstract:
Smart and mobile environments require seamless connections. However, due to the frequent process of "discovery" and disconnection of mobile devices while data interchange is happening, wireless connections are often interrupted. To minimize this drawback, a protocol that enables easy and fast synchronization is crucial. Bearing this in mind, Bluetooth technology appears to be a suitable solution for carrying such connections due to the discovery and pairing capabilities it provides. Nonetheless, the time and energy spent when several devices are being discovered and used at the same time still need to be managed properly. It is essential that this discovery process take as little time and energy as possible. In addition, it is believed that the performance of the communications is not constant when transmission speeds and throughput increase, but this has not been proved formally. Therefore, the purpose of this project is twofold: firstly, to design and build a framework-system capable of performing controlled Bluetooth device discovery, pairing, and communication; secondly, to analyze and test the scalability and performance of the classic Bluetooth standard under different scenarios and with various sensors and devices using the framework developed. To achieve the first goal, a generic Bluetooth platform will be used to control the test conditions and to form a ubiquitous wireless system connected to an Android smartphone. For the latter goal, various stress tests will be carried out to measure the rate of battery consumption as well as the quality of the communications between the devices involved.
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
Abstract:
A technique for obtaining approximate periodic solutions to nonlinear ordinary differential equations is investigated. The approach is based on defining an equivalent differential equation whose exact periodic solution is known. Emphasis is placed on the mathematical justification of the approach. The relationship between the differential equation error and the solution error is investigated, and, under certain conditions, bounds are obtained on the latter. The technique employed is to consider the equation governing the exact solution error as a two point boundary value problem. Among other things, the analysis indicates that if an exact periodic solution to the original system exists, it is always possible to bound the error by selecting an appropriate equivalent system.
Three equivalence criteria for minimizing the differential equation error are compared, namely, minimum mean square error, minimum mean absolute value error, and minimum maximum absolute value error. The problem is analyzed by way of example, and it is concluded that, on the average, the minimum mean square error is the most appropriate criterion to use.
A comparison is made between the use of linear and cubic auxiliary systems for obtaining approximate solutions. In the examples considered, the cubic system provides noticeable improvement over the linear system in describing periodic response.
A comparison of the present approach to some of the more classical techniques is included. It is shown that certain of the standard approaches where a solution form is assumed can yield erroneous qualitative results.
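The minimum mean square error criterion above can be illustrated on a simple case: for a Duffing-type restoring force f(x) = x + εx³ with an assumed periodic solution x = A cos θ, minimizing the mean square equation error over a cycle gives an equivalent linear stiffness k_eq = 1 + (3/4)εA². This is the standard equivalent-linearization result, sketched here numerically; the specific example is an assumption for illustration, not taken from the source.

```python
import math

def equivalent_stiffness(eps: float, A: float, n: int = 10000) -> float:
    """Mean-square-optimal linear stiffness for f(x) = x + eps*x**3 with
    x = A*cos(theta): k_eq = <f(x)*x> / <x**2>, averaged over one cycle."""
    num = den = 0.0
    for i in range(n):
        th = 2 * math.pi * i / n
        x = A * math.cos(th)
        f = x + eps * x ** 3
        num += f * x
        den += x * x
    return num / den

eps, A = 0.2, 1.5
print(equivalent_stiffness(eps, A))   # numerical estimate
print(1 + 0.75 * eps * A * A)         # closed form: 1.3375
```

The numerical average reproduces the closed-form value because the averages of cos²θ and cos⁴θ over a full cycle are 1/2 and 3/8, giving k_eq = 1 + ε(3/4)A².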
Abstract:
The dispersion compensation effect of the chirped fiber grating (CFG) is analyzed theoretically, and analytic expressions are derived for composite second-order (CSO) distortion in analog modulated sub-carrier multiplexed (AM-SCM) cable television (CATV) systems with externally and directly modulated transmitters. Simulations are given for the two kinds of modulation and for standard single-mode fiber and non-zero dispersion-shifted fiber (NZDSF) systems. The results show that the CFG can be used as a dispersion compensator in directly modulated systems, but its dispersion coefficient must be adjusted much more precisely than in the externally modulated system; the requirements for the NZDSF system can be loosened considerably. It is proposed that a directly modulated source may be used as a transmitter in CATV systems, combined with a tunable CFG dispersion compensator adjusted precisely, which may be more cost-effective than external modulation technology. (c) 2006 Elsevier GmbH. All rights reserved.
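The basic sizing behind such a compensator can be sketched with simple arithmetic: the grating must supply dispersion equal and opposite to the dispersion accumulated along the link. The fiber dispersion value below (17 ps/(nm·km), typical for standard single-mode fiber near 1550 nm) is an illustrative assumption, not a figure from the paper.

```python
# Rough dispersion-compensation sizing: a chirped fiber grating must
# supply dispersion equal and opposite to the accumulated link dispersion.
# D = 17 ps/(nm*km) is a typical SMF value at 1550 nm (assumed here).

def required_grating_dispersion(D_fiber_ps_nm_km: float, length_km: float) -> float:
    """Grating dispersion in ps/nm that cancels the link's dispersion."""
    return -D_fiber_ps_nm_km * length_km

print(required_grating_dispersion(17.0, 50.0))  # -850.0
```

The paper's point about tuning precision corresponds to how tightly this −850 ps/nm target must be matched in the directly modulated case, where laser chirp interacts with residual dispersion.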
Abstract:
We are at the cusp of a historic transformation of both communication system and electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems of these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve reliability as well as efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate a new algorithm Balia (balanced linked adaptation) which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new proposed algorithm Balia with existing MP-TCP algorithms.
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed-integer nonlinear program and hence hard to solve. We propose a heuristic algorithm that is based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even faster algorithm that incurs a loss in optimality of less than 3% on the test networks.
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results that suggest solving for a globally optimal solution of OPF over a radial network through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and computation time is reduced by 100x compared with iterative methods.
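The benefit of closed-form ADMM subproblem updates can be sketched on a toy consensus problem, minimize Σᵢ (aᵢ/2)(x − cᵢ)²: both the x-updates and the z-update have closed forms, so no inner iterative solver is needed. This is an illustrative sketch of the general idea, not the thesis's OPF decomposition.

```python
# Toy consensus ADMM with closed-form subproblem updates, illustrating why
# closed-form x-updates avoid inner iterative solvers.
# minimize sum_i (a_i/2)*(x - c_i)^2  ->  optimum is the weighted mean.

def consensus_admm(a, c, rho=1.0, iters=300):
    n = len(a)
    x = [0.0] * n
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # consensus variable
    for _ in range(iters):
        # Closed-form x_i-update:
        # argmin_x (a_i/2)(x-c_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(a[i] * c[i] + rho * (z - u[i])) / (a[i] + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n          # z-update
        u = [u[i] + x[i] - z for i in range(n)]             # dual update
    return z

a, c = [1.0, 2.0, 4.0], [1.0, 2.0, 3.0]
z = consensus_admm(a, c)
print(z)   # converges to the weighted mean 17/7 ≈ 2.4286
```

Each iteration here costs only a handful of arithmetic operations per agent; in the thesis's setting the analogous savings come from the per-bus subproblems reducing to closed forms or small eigenvalue problems.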
Abstract:
I. The binding of the intercalating dye ethidium bromide to closed circular SV 40 DNA causes an unwinding of the duplex structure and a simultaneous and quantitatively equivalent unwinding of the superhelices. The buoyant densities and sedimentation velocities of both intact (I) and singly nicked (II) SV 40 DNAs were measured as a function of free dye concentration. The buoyant density data were used to determine the binding isotherms over a dye concentration range extending from 0 to 600 µg/ml in 5.8 M CsCl. At high dye concentrations all of the binding sites in II, but not in I, are saturated. At free dye concentrations less than 5.4 µg/ml, I has a greater affinity for dye than II. At a critical amount of dye bound I and II have equal affinities, and at higher dye concentrations I has a lower affinity than II. The number of superhelical turns, τ, present in I is calculated at each dye concentration using Fuller and Waring's (1964) estimate of the angle of duplex unwinding per intercalation. The results reveal that SV 40 DNA I contains about −13 superhelical turns in concentrated salt solutions.
The free energy of superhelix formation is calculated as a function of τ from a consideration of the effect of the superhelical turns upon the binding isotherm of ethidium bromide to SV 40 DNA I. The value of the free energy is about 100 kcal/mole DNA in the native molecule. The free energy estimates are used to calculate the pitch and radius of the superhelix as a function of the number of superhelical turns. The pitch and radius of the native I superhelix are 430 Å and 135 Å, respectively.
A buoyant density method for the isolation and detection of closed circular DNA is described. The method is based upon the reduced binding of the intercalating dye, ethidium bromide, by closed circular DNA. In an application of this method it is found that HeLa cells contain in addition to closed circular mitochondrial DNA of mean length 4.81 microns, a heterogeneous group of smaller DNA molecules which vary in size from 0.2 to 3.5 microns and a paucidisperse group of multiples of the mitochondrial length.
II. The general theory is presented for the sedimentation equilibrium of a macromolecule in a concentrated binary solvent in the presence of an additional reacting small molecule. Equations are derived for the calculation of the buoyant density of the complex and for the determination of the binding isotherm of the reagent to the macrospecies. The standard buoyant density, a thermodynamic function, is defined and the density gradients which characterize the four component system are derived. The theory is applied to the specific cases of the binding of ethidium bromide to SV 40 DNA and of the binding of mercury and silver to DNA.
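The relation between bound dye and superhelical turns in Part I can be sketched arithmetically: each intercalation unwinds the duplex by a fixed angle φ, so relaxing τ = −13 superhelical turns requires roughly 13 × 360 / φ bound dye molecules. The φ = 12° value used below is the classical Fuller-Waring estimate referenced in the abstract (later work favors a larger angle, near 26°); this is a back-of-the-envelope sketch, not the thesis's full isotherm calculation.

```python
# Back-of-the-envelope: number of intercalated dye molecules needed to
# relax tau superhelical turns, given an unwinding angle per intercalation.
# phi = 12 degrees is the classical Fuller-Waring (1964) estimate; later
# estimates are larger (~26 degrees), which halves the required number.

def dye_to_relax(tau_turns: float, phi_deg: float = 12.0) -> float:
    """Dye molecules needed to unwind |tau| superhelical turns."""
    return abs(tau_turns) * 360.0 / phi_deg

print(dye_to_relax(-13))         # 390.0 with the 12-degree estimate
print(dye_to_relax(-13, 26.0))   # 180.0 with the larger modern estimate
```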