964 results for Logical necessity
Abstract:
Various logical formalisms with the freeze quantifier have been recently considered to model computer systems even though this is a powerful mechanism that often leads to undecidability. In this paper, we study a linear-time temporal logic with past-time operators such that the freeze operator is only used to express that some value from an infinite set is repeated in the future or in the past. Such a restriction has been inspired by a recent work on spatio-temporal logics. We show decidability of finitary and infinitary satisfiability by reduction into the verification of temporal properties in Petri nets. This is a surprising result since the logic is closed under negation, contains future-time and past-time temporal operators and can express the nonce property and its negation. These ingredients are known to lead to undecidability with a more liberal use of the freeze quantifier.
Abstract:
Analysis of climate change impacts on streamflow by perturbing the climate inputs has been a concern for many authors in the past few years, but there are few analyses for the impacts on water quality. To examine the impact of change in climate variables on the water quality parameters, the water quality input variables have to be perturbed. The primary input variables that can be considered for such an analysis are streamflow and water temperature, which are affected by changes in precipitation and air temperature, respectively. Using hypothetical scenarios to represent both greenhouse warming and streamflow changes, the sensitivity of the water quality parameters has been evaluated under conditions of altered river flow and river temperature in this article. Historical data analysis of hydroclimatic variables is carried out, which includes flow duration exceedance percentage (e.g. Q90), single low-flow indices (e.g. 7Q10, 30Q10) and relationships between climatic variables and surface variables. For the study region of Tunga-Bhadra river in India, low flows are found to be decreasing and water temperatures are found to be increasing. As a result, there is a reduction in dissolved oxygen (DO) levels found in recent years. Water quality responses of six hypothetical climate change scenarios were simulated by the water quality model, QUAL2K. A simple linear regression relation between air and water temperature is used to generate the scenarios for river water temperature. The results suggest that all the hypothetical climate change scenarios would cause impairment in water quality. It was found that there is a significant decrease in DO levels due to the impact of climate change on temperature and flows, even when the discharges were at safe permissible levels set by pollution control agencies (PCAs).
The necessity to improve the standards of PCAs and to develop adaptation policies for the dischargers to account for climate change is examined through a fuzzy waste load allocation model developed earlier. Copyright (C) 2011 John Wiley & Sons, Ltd.
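The air-to-water temperature regression used above to generate the river temperature scenarios can be sketched in a few lines. This is a minimal illustration; the coefficients, the historical records, and the +2 °C warming offset below are invented for the example, not values from the study.

```python
# Sketch of scenario generation via a simple air-water temperature regression.
# All numbers here are illustrative assumptions, not data from the paper.
from statistics import mean

def fit_linear(x, y):
    """Ordinary least-squares fit of y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical historical record of (air temp, water temp) in deg C
air = [24.0, 26.0, 28.0, 30.0, 32.0]
water = [22.5, 24.0, 25.5, 27.0, 28.5]

a, b = fit_linear(air, water)

# Scenario: greenhouse warming raises air temperature by an assumed 2 deg C;
# the regression maps the perturbed air temperatures to river temperatures
scenario_air = [t + 2.0 for t in air]
scenario_water = [a + b * t for t in scenario_air]
```

The perturbed water temperatures would then be fed to the water quality model (QUAL2K in the study) to evaluate the DO response.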
Abstract:
Purpose – To investigate the use of centre of gravity location for reducing cyclic pitch control in helicopter UAVs (unmanned air vehicles) and MAVs (micro air vehicles). Low cyclic pitch is a necessity for implementing the swashplateless rotor concept using trailing edge flaps or active twist with current generation low authority piezoceramic actuators. Design/methodology/approach – An aeroelastic analysis of the helicopter rotor with elastic blades is used to perform parametric and sensitivity studies of the effects of longitudinal and lateral center of gravity (cg) movements on the main rotor cyclic pitch. An optimization approach is then used to find cg locations which reduce the cyclic pitch at a given forward speed. Findings – It is found that the longitudinal cyclic pitch and lateral cyclic pitch can be driven to zero at a given forward speed by shifting the cg forward and to the port side, respectively. There also exist pairs of longitudinal and lateral cg locations which drive both cyclic pitch components to zero at a given forward speed. Based on these results, a compromise optimal cg location is obtained such that the cyclic pitch is bounded within ±5° for a BO105 helicopter rotor. Originality/value – The reduction in cyclic pitch due to helicopter cg location is found to significantly reduce the maximum magnitudes of the control angles in flight, facilitating the swashplateless rotor concept. In addition, the existence of cg locations which drive the cyclic pitches to zero allows for the use of active cg movement as a way to replace cyclic pitch control for helicopter MAVs.
Abstract:
Superscalar processors currently have the potential to fetch multiple basic blocks per cycle by employing one of several recently proposed instruction fetch mechanisms. However, this increased fetch bandwidth cannot be exploited unless pipeline stages further downstream correspondingly improve. In particular, register renaming a large number of instructions per cycle is difficult. A large instruction window, needed to receive multiple basic blocks per cycle, will slow down dependence resolution and instruction issue. This paper addresses these and related issues by proposing (i) partitioning of the instruction window into multiple blocks, each holding a dynamic code sequence; (ii) logical partitioning of the register file into a global file and several local files, the latter holding registers local to a dynamic code sequence; (iii) the dynamic recording and reuse of register renaming information for registers local to a dynamic code sequence. Performance studies show these mechanisms improve performance over traditional superscalar processors by factors ranging from 1.5 to a little over 3 for the SPEC Integer programs. Next, it is observed that several of the loops in the benchmarks display vector-like behavior during execution, even if the static loop bodies are likely too complex for compile-time vectorization. A dynamic loop vectorization mechanism that builds on top of the above mechanisms is briefly outlined. The mechanism vectorizes up to 60% of the dynamic instructions for some programs, although the average number of iterations per loop is quite small.
Abstract:
A new scheme for robust estimation of the partial state of linear time-invariant multivariable systems is presented, and it is shown how this may be used for the detection of sensor faults in such systems. We consider an observer to be robust if it generates a faithful estimate of the plant state in the face of modelling uncertainty or plant perturbations. Using the Stable Factorization approach we formulate the problem of optimal robust observer design by minimizing an appropriate norm on the estimation error. A logical candidate is the 2-norm, corresponding to an H∞ optimization problem, for which solutions are readily available. In the special case of a stable plant, the optimal fault diagnosis scheme reduces to an internal model control architecture.
Abstract:
Process control rules may be specified using decision tables. Such a specification is superior when the control task is dominated by logical decisions. In this paper we give a method of detecting redundancies, incompleteness, and contradictions in such specifications. Using such a technique thus ensures the validity of the specifications.
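The three defect classes named above can be checked mechanically for small tables by enumerating all condition combinations. The sketch below assumes boolean conditions and a simple rule encoding; the rule names and the particular checks are illustrative, not the paper's algorithm.

```python
# Illustrative validity checks for a boolean decision table:
# incompleteness (no rule fires), contradictions (conflicting actions fire),
# and redundancy (a rule is fully subsumed by the others).
from itertools import product

# Each rule: (tuple of condition values, None meaning "don't care", action)
rules = [
    ((True,  None), "open_valve"),
    ((False, True), "close_valve"),
    ((True,  True), "open_valve"),   # fully covered by the first rule
]

def matches(pattern, case):
    return all(p is None or p == c for p, c in zip(pattern, case))

def validate(rules, n_conditions):
    """Return (incomplete cases, contradictory cases, redundant rule indices)."""
    cases = list(product([True, False], repeat=n_conditions))
    incomplete = [c for c in cases
                  if not any(matches(p, c) for p, _ in rules)]
    contradictions = [c for c in cases
                      if len({a for p, a in rules if matches(p, c)}) > 1]
    redundant = [i for i, (p, a) in enumerate(rules)
                 if all(any(matches(q, c) and b == a
                            for j, (q, b) in enumerate(rules) if j != i)
                        for c in cases if matches(p, c))]
    return incomplete, contradictions, redundant

incomplete, contradictions, redundant = validate(rules, 2)
# No rule covers (False, False); rule 2 is subsumed by rule 0
```

Exhaustive enumeration is exponential in the number of conditions, so this is only workable for the small tables typical of control specifications.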
Abstract:
Understanding the volume change behaviour of expansive soils/clays is a dire necessity for obtaining engineering solutions for structures founded on these soils. The behaviour of expansive soils does not conform to the usual behaviour of fine-grained soils. In most cases, the permissible heave/settlement forms the design criterion. The paper discusses the basic properties, the role of the effective stress concept, the basic mechanism controlling the volume change behaviour, the role of double layer repulsion and its validity, and certain basic considerations for a footing resting on an expansive soil with respect to heave or settlement, with soil reinforcement as a possible engineering solution.
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when d = n-1. This code has a particularly simple graphical description, and most interestingly has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data as repair by transfer. The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios which we term "helper node pooling," and show that it is the necessity to satisfy such scenarios that overconstrains the system.
Abstract:
Design of the required tool is a key parameter in the technique of friction stir welding (FSW). This is because tool design exerts close control over the quality of the weld. In an attempt to optimize tool design and its selection, it is essential and desirable to understand the mechanisms governing the formation of the weld. In this research study, a few experiments were conducted to systematically analyze the intrinsic mechanisms governing the formation of the weld and to effectively utilize the analysis to establish a logical basis for design of the tool. For this purpose, the experiments were conducted using different geometries of the shoulder and pin of the rotating tool in such a way that only tool geometry had an intrinsic influence on formation of the weld. The results revealed that for a particular diameter of the pin there is an optimum diameter of the shoulder. Below this optimum shoulder diameter, the weld does not form, while above the optimum diameter the overall symmetry of the weld is lost. Based on experimental results, a mechanism for the formation of friction stir welds is proposed. A synergism of the experimental results with the proposed mechanism is helpful in establishing the set of welding parameters for a given material.
Abstract:
There is a lot of pressure on all the developed and second world countries to produce low-emission power, and distributed generation (DG) is found to be one of the most viable ways to achieve this. DG generally makes use of renewable energy sources like wind, micro turbines, photovoltaics, etc., which produce power with minimum greenhouse gas emissions. While installing a DG it is important to define its size and optimal location, enabling minimum network expansion and line losses. In this paper, a methodology to locate the optimal site for a DG installation, with the objective to minimize the net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach will help to identify the new DG location(s) without the necessity to conduct repeated power flows. To validate this methodology, case studies are carried out on a 20-node, 66 kV system, a part of Karnataka Transco, and results are presented.
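The abstract does not spell out the RED computation; one formulation found in the relative electrical distance literature partitions the bus admittance matrix into load (L) and generator (G) blocks, forms F = -inv(Y_LL) Y_LG, and takes RED = 1 - |F|. The sketch below follows that assumed formulation with a made-up 2-load, 2-candidate toy network; the admittance values are illustrative only.

```python
# Hedged sketch of a relative electrical distance (RED) ranking for
# candidate DG buses. The partitioned Y-bus entries are invented for
# illustration; the formulation F = -inv(Y_LL) @ Y_LG, RED = 1 - |F|
# is an assumption about the method, not quoted from the paper.
import numpy as np

# Toy partitioned Y-bus: 2 load buses (rows), 2 candidate DG buses (cols)
Y_LL = np.array([[ 4.0 - 8.0j, -2.0 + 4.0j],
                 [-2.0 + 4.0j,  5.0 - 10.0j]])
Y_LG = np.array([[-1.0 + 2.0j, -1.0 + 2.0j],
                 [-1.5 + 3.0j, -0.5 + 1.0j]])

F = -np.linalg.inv(Y_LL) @ Y_LG   # load-to-generator coupling matrix
RED = 1.0 - np.abs(F)             # relative electrical distances

# Rank candidate DG buses by total electrical closeness to the loads;
# a smaller summed RED suggests a better site under this heuristic
ranking = np.argsort(RED.sum(axis=0))
best_bus = int(ranking[0])
```

The point of the method is visible even in this toy: the ranking comes from one matrix computation on network data, with no repeated power flow runs per candidate site.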
Guided Wave-based Damage Detection in a Composite T-joint using a 3D Scanning Laser Doppler Vibrometer
Abstract:
Composite T-joints are commonly used in modern composite airframes, pressure vessels, piping structures, and multi-cell thin-walled structures, mainly to increase the bending strength of the joint and prevent buckling of plates and shells. Here we report a detailed study on the propagation of guided ultrasonic wave modes in a composite T-joint and their interactions with delamination in the co-cured co-bonded flange. A well-designed guiding path is employed wherein the waves undergo a two-step mode conversion process: one due to the web and joint filler on the back face of the flange, and the other due to the delamination edges close to and underneath the accessible surface of the flange. A 3D Laser Doppler Vibrometer is used to obtain the three components of surface displacements/velocities of the accessible face of the flange of the T-joint. The waves are launched by a piezoceramic wafer bonded onto the back surface of the flange. What is novel in the proposed method is that the location of any change in material/geometric properties can be traced by computing a frequency domain power flow along a scan line. The scan line can be chosen over a grid either during the scan or during post-processing of the scan data offline. The proposed technique eliminates the necessity of baseline data and of disassembling the structure for structural interrogation.
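The localization idea, computing a frequency-domain power measure at each scan point and flagging where it changes along the scan line, can be sketched with synthetic signals. The sampling rate, excitation frequency, and the signals themselves are fabricated for illustration; the real method uses the three vibrometer-measured velocity components and a proper power flow quantity.

```python
# Hedged sketch: band power per scan point along a scan line, with an
# abrupt change marking a material/geometric discontinuity. The signals
# below are synthetic stand-ins for vibrometer velocity records.
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Spectral power of `signal` in the band [f_lo, f_hi] Hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(np.abs(spec[band]) ** 2))

fs = 1_000_000.0                 # assumed 1 MHz sampling
t = np.arange(512) / fs
# Synthetic scan line of 6 points: the last two carry extra wave energy,
# as if a delamination edge were reflecting the incident mode there
scan = [np.sin(2 * np.pi * 100e3 * t) for _ in range(4)]
scan += [1.8 * np.sin(2 * np.pi * 100e3 * t) for _ in range(2)]

powers = [band_power(s, fs, 80e3, 120e3) for s in scan]
# The largest jump between adjacent scan points locates the discontinuity
jump_at = int(np.argmax(np.abs(np.diff(powers)))) + 1
```

Because the change is detected relative to neighbouring scan points on the same structure, no pristine baseline record is needed, which is the property the abstract highlights.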
Abstract:
Various logical formalisms with the freeze quantifier have been recently considered to model computer systems even though this is a powerful mechanism that often leads to undecidability. In this article, we study a linear-time temporal logic with past-time operators such that the freeze operator is only used to express that some value from an infinite set is repeated in the future or in the past. Such a restriction has been inspired by a recent work on spatio-temporal logics that suggests such a restricted use of the freeze operator. We show decidability of finitary and infinitary satisfiability by reduction into the verification of temporal properties in Petri nets by proposing a symbolic representation of models. This is a quite surprising result in view of the expressive power of the logic since the logic is closed under negation, contains future-time and past-time temporal operators and can express the nonce property and its negation. These ingredients are known to lead to undecidability with a more liberal use of the freeze quantifier. The article also contains developments about the relationships between temporal logics with the freeze operator and counter automata as well as reductions into first-order logics over data words.
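The restricted freeze use the article allows, "the current value occurs again in the future (or occurred in the past)", can be evaluated directly on a finite data word. A minimal sketch, with an illustrative data word not taken from the article:

```python
# Evaluating the restricted freeze properties on a finite data word:
# word[i] is the data value at position i.

def repeats_in_future(word, i):
    """Freeze-style test: does word[i] occur again at some j > i?"""
    return word[i] in word[i + 1:]

def repeats_in_past(word, i):
    """Past-time counterpart: did word[i] already occur at some j < i?"""
    return word[i] in word[:i]

word = [7, 3, 7, 5, 3]
# Position 0 holds 7, which is repeated at position 2
assert repeats_in_future(word, 0)
# The 5 at position 3 is a "nonce": it never occurs again
assert not repeats_in_future(word, 3)
```

The nonce property mentioned in the abstract is exactly the negation of such a repetition: a value that occurs precisely once in the word.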
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of the components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution to this problem is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects like logical derating, electrical derating and timing window derating, and/or making use of application redundancy, e.g. redundancy in firmware/software executing on the so designed robust hardware. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to 23% reduction in the hardware overhead when considering individual and combined derating factors.
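If the three derating effects are treated as independent masking probabilities, their combined effect on the raw upset rate compounds multiplicatively. The factor values below are invented for illustration; they are not measurements from the paper.

```python
# Illustrative combination of derating factors: each factor is the
# fraction of raw upsets that survives that masking mechanism. The
# numeric values are assumptions, not results from the study.
raw_ser = 1000.0           # raw upsets per unit time (arbitrary units)

logical_derating = 0.4     # fraction landing on a sensitized logic path
electrical_derating = 0.6  # fraction of pulses surviving attenuation
timing_derating = 0.3      # fraction latched within the capture window

effective_ser = (raw_ser * logical_derating
                 * electrical_derating * timing_derating)
```

Under these assumed factors only 7.2% of raw upsets become observable soft errors, which is why accounting for derating lets a design meet a robustness target with less added protection hardware.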
Abstract:
The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Phi) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of the various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Phi) from U_measured^cal. In the first approach, the measurement data with a homogeneous phantom (U_measured^homo) is used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the necessity of a homogeneous phantom. This is done by statistically averaging the data (U_measured^hetero) and redistributing it to the corresponding detector positions. The experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JBO.18.2.026023]
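The calibration-free second scheme, averaging the heterogeneous measurements and redistributing the averages to the detector positions, can be sketched under the assumption that the averaging is done per source-detector distance group. The grouping and the flux values are made up for the example.

```python
# Hedged sketch of the calibration-free reference estimate: average the
# heterogeneous-phantom measurements over detectors sharing a
# source-detector distance bucket, then redistribute that average as a
# stand-in homogeneous reference. Grouping and values are illustrative.
from collections import defaultdict
from statistics import mean

# (source-detector distance bucket, measured flux) pairs: invented data
measured = [(1, 0.90), (1, 0.80), (2, 0.40), (2, 0.50), (2, 0.45)]

by_distance = defaultdict(list)
for d, u in measured:
    by_distance[d].append(u)

# Redistribute: every detector in bucket d receives the bucket average
reference = [(d, mean(by_distance[d])) for d, _ in measured]
```

The redistributed averages play the role of the homogeneous-phantom measurement, so the inhomogeneity appears as the deviation of each detector's reading from its bucket average.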
Abstract:
This paper presents an experimental study that was conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities that are carried out in the design process, depending on the method that is employed; in other words, whether the design methods that are used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity in the solutions that are obtained. Creativity evaluation was done by means of the degree of novelty and the level of resolution of the designed solutions, using the creative product semantic scale (CPSS) questionnaire. The results show that there are significant differences in the amount of time devoted to activities related to understanding the problem, depending on the typology of the design method, intuitive or logical, that is used. While the amount of time spent on analyzing the problem is very small with intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It was also found that the amount of time spent in each design phase has an influence on the results in terms of creativity, but the results are not strong enough to determine to what extent they are affected. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]