977 results for Output variables


Relevance:

20.00%

Publisher:

Abstract:

The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with the future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and Study V also with respect to MI location. In Study VI the depolarization and repolarization variables were examined in 79 patients with evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared insensitive to the Q-wave status, the time elapsed from MI, and the MI or ischemia location. In the face of evolving ischemic injury, the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced to the QT integral variable recorded at a single site on the left flank. In conclusion, the QT integral variable, detectable in a single lead with an optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. The QT integral, from a single lead or a small number of leads, offers potential for automated screening of ischemic heart disease, acute ischemia monitoring, therapeutic decision guidance, and risk stratification.
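As a concrete illustration of the kind of single-lead variable discussed above, the sketch below computes a QT integral as the time integral of one ECG lead over the QT interval. The sampling rate, fiducial indices, and synthetic waveform are hypothetical placeholders, and the exact definition used in the studies may differ.

```python
# Minimal sketch of a QT-integral computation for one ECG lead.
# Assumptions (not from the studies): the lead is uniformly sampled at fs Hz and
# the QRS-onset and T-wave-end sample indices come from a prior delineation step.
import numpy as np

def qt_integral(lead_mv: np.ndarray, fs: float, qrs_onset: int, t_end: int) -> float:
    """Time integral (mV*s) of the lead over the QT interval."""
    segment = lead_mv[qrs_onset:t_end + 1]        # QRS onset ... T-wave end
    return float(np.trapz(segment, dx=1.0 / fs))  # area under the curve

# Toy usage with a synthetic lead (hypothetical numbers).
fs = 500.0                                        # samples per second
t = np.arange(0.0, 0.6, 1.0 / fs)
lead = 0.8 * np.exp(-((t - 0.05) / 0.01) ** 2) + 0.2 * np.exp(-((t - 0.35) / 0.05) ** 2)
print(qt_integral(lead, fs, qrs_onset=10, t_end=250))
```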

Relevance:

20.00%

Publisher:

Abstract:

Owing to widespread applications, the synthesis and characterization of silver nanoparticles has recently been attracting considerable attention. Increasing environmental concerns over chemical synthesis routes have resulted in attempts to develop biomimetic approaches. One of them is synthesis using plant parts, which eliminates the elaborate process of maintaining a microbial culture and is often found to be kinetically more favourable than other bioprocesses. The present study investigates the effect of process variables such as reductant concentration, reaction pH, mixing ratio of the reactants, and interaction time on the morphology and size of silver nanoparticles synthesized using an aqueous extract of Azadirachta indica (Neem) leaves. The formation of crystalline silver nanoparticles was confirmed using X-ray diffraction analysis. By means of UV spectroscopy and scanning and transmission electron microscopy, it was observed that the morphology and size of the nanoparticles were strongly dependent on the process parameters. Within a 4 h interaction period, nearly spherical nanoparticles below 20 nm in size were produced. On increasing the interaction time (ageing) to 66 days, both the aggregation and the shape anisotropy (ellipsoidal, polyhedral and capsular) of the particles increased. In the alkaline pH range, the stability of the cluster distribution increased, with a reduced tendency for particle aggregation. It can be inferred from the study that fine-tuning the bioprocess parameters will enhance the possibility of obtaining nano-products tailor-made for particular applications.

Relevance:

20.00%

Publisher:

Abstract:

The output of a laser is a high frequency propagating electromagnetic field with superior coherence and brightness compared to that emitted by thermal sources. A multitude of different types of lasers exist, which also translates into large differences in the properties of their output. Moreover, the characteristics of the electromagnetic field emitted by a laser can be influenced from the outside, e.g., by injecting an external optical field or by optical feedback. In the case of free-running solitary class-B lasers, such as semiconductor and Nd:YVO4 solid-state lasers, the phase space is two-dimensional, the dynamical variables being the population inversion and the amplitude of the electromagnetic field. The two-dimensional structure of the phase space means that no complex dynamics can be found. If a class-B laser is perturbed from its steady state, then the steady state is restored after a short transient. However, as discussed in part (i) of this Thesis, the static properties of class-B lasers, as well as their artificially or noise-induced dynamics around the steady state, can be experimentally studied in order to gain insight on laser behaviour, and to determine model parameters that are not known ab initio. In this Thesis particular attention is given to the linewidth enhancement factor, which describes the coupling between the gain and the refractive index in the active material. A highly desirable attribute of an oscillator is stability, both in frequency and amplitude. Nowadays, however, instabilities in coupled lasers have become an active area of research motivated not only by the interesting complex nonlinear dynamics but also by potential applications. In part (ii) of this Thesis the complex dynamics of unidirectionally coupled, i.e., optically injected, class-B lasers is investigated. An injected optical field increases the dimensionality of the phase space to three by turning the phase of the electromagnetic field into an important variable. This has a radical effect on laser behaviour, since very complex dynamics, including chaos, can be found in a nonlinear system with three degrees of freedom. The output of the injected laser can be controlled in experiments by varying the injection rate and the frequency of the injected light. In this Thesis the dynamics of unidirectionally coupled semiconductor and Nd:YVO4 solid-state lasers is studied numerically and experimentally.
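To make the two-dimensional class-B phase space concrete, the sketch below integrates the standard dimensionless single-mode rate equations for intensity and population inversion and checks that a perturbation decays back to the steady state. The normalization and the pump and damping values are generic textbook choices, not parameters determined in the Thesis.

```python
# Minimal sketch: free-running class-B laser rate equations (dimensionless form)
#   dI/dt = I * (D - 1),      dD/dt = gamma * (A - D * (1 + I))
# with I the field intensity and D the population inversion. A (pump) and gamma
# (ratio of inversion to photon decay rates) are illustrative values only.
import numpy as np

def simulate(A=2.0, gamma=5e-3, dt=0.01, steps=100_000):
    I, D = (A - 1.0) * 1.1, 1.0         # start slightly perturbed from steady state
    traj = np.empty((steps, 2))
    for k in range(steps):
        dI = I * (D - 1.0)
        dD = gamma * (A - D * (1.0 + I))
        I += dt * dI
        D += dt * dD
        traj[k] = I, D
    return traj

traj = simulate()
# Steady state for A = 2: I* = A - 1 = 1, D* = 1 (relaxation oscillations decay).
print("steady state reached:", np.allclose(traj[-1], [1.0, 1.0], atol=1e-2))
```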

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a study of kinematic and force singularities in parallel manipulators and closed-loop mechanisms and their relationship to the accessibility and controllability of such manipulators and mechanisms. Parallel manipulators and closed-loop mechanisms are classified according to their degrees of freedom, the number of output Cartesian variables used to describe their motion, and the number of actuated joint inputs. The singularities in the workspace are obtained by considering the force transformation matrix, which maps the forces and torques in joint space to output forces and torques in Cartesian space. The regions in the workspace which violate the small-time local controllability (STLC) and small-time local accessibility (STLA) conditions are obtained by deriving the equations of motion in terms of Cartesian variables and by using techniques from Lie algebra. We show that for fully actuated manipulators, when the number of actuated joint inputs is equal to the number of output Cartesian variables and the force transformation matrix loses rank, the parallel manipulator does not meet the STLC requirement. For the case where the number of joint inputs is less than the number of output Cartesian variables, the force transformation matrix loses rank if the constraint forces and torques (represented by the Lagrange multipliers) become infinite. Finally, we show that the singular and non-STLC regions in the workspace of a parallel manipulator and closed-loop mechanism can be reduced by adding redundant joint actuators and links. The results are illustrated with the help of numerical examples in which we plot the singular and non-STLC/non-STLA regions of parallel manipulators and closed-loop mechanisms belonging to the above-mentioned classes. (C) 2000 Elsevier Science Ltd. All rights reserved.
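A rank test on the force transformation matrix is the computational core of the singularity analysis described above. The sketch below flags near-singular configurations for a hypothetical 2x2 matrix parameterized by a single joint angle; the paper derives the actual matrices for each manipulator class.

```python
# Minimal sketch: flag candidate singular configurations by checking when a
# force transformation matrix loses rank. The matrix below is hypothetical.
import numpy as np

def force_transformation(theta: float) -> np.ndarray:
    # Hypothetical map from joint torques to Cartesian forces; det = cos(theta).
    return np.array([[np.cos(theta),       -np.sin(theta)],
                     [np.sin(2.0 * theta),  np.cos(2.0 * theta)]])

def is_near_singular(J: np.ndarray, tol: float = 1e-3) -> bool:
    # Rank deficiency detected via the smallest singular value.
    return np.linalg.svd(J, compute_uv=False)[-1] < tol

flagged = [th for th in np.linspace(0.0, np.pi, 2001)
           if is_near_singular(force_transformation(th))]
print(f"{len(flagged)} sampled configuration(s) flagged near theta = pi/2")
```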

Relevance:

20.00%

Publisher:

Abstract:

The specific objective of this paper is to develop multivariable controllers that would achieve asymptotic regulation in the presence of parameter variations and disturbance inputs for a tubular reactor used in ammonia synthesis. A ninth order state space model with three control inputs and two disturbance inputs is generated from the nonlinear distributed model using linearization and lumping approximations. Using this model, an approach for control system design is developed keeping in view the imperfections of the model and the measurability of the state variables. Specifically, the design of feedforward and robust integral controllers using state and output feedback is considered. Also, the design of robust multiloop proportional integral controllers is presented. Finally the performance of these controllers is evaluated through simulation.
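As a toy illustration of the regulation objective described here (not the ninth-order reactor model itself), the sketch below simulates a hypothetical second-order state-space plant under a single PI loop with a step disturbance and checks that the output settles at the setpoint.

```python
# Minimal sketch: PI control of a toy linear state-space model with a step
# disturbance, illustrating asymptotic regulation. A, B, E, C and the gains are
# hypothetical; the paper's reactor model is ninth order with three control
# inputs and two disturbance inputs.
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([1.0, 0.5])              # control input vector
E = np.array([0.3, 0.2])              # disturbance input vector
C = np.array([1.0, 0.0])              # regulated output y = C x

Kp, Ki = 2.0, 1.5                     # illustrative PI gains
dt, steps = 1e-3, 20_000
r, d = 1.0, 0.5                       # setpoint and step disturbance
x, integ = np.zeros(2), 0.0

for _ in range(steps):
    e = r - float(C @ x)
    integ += e * dt
    u = Kp * e + Ki * integ           # PI control law
    x = x + dt * (A @ x + B * u + E * d)

print("steady-state output:", round(float(C @ x), 3))   # close to the setpoint 1.0
```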

Relevance:

20.00%

Publisher:

Abstract:

The present study of the stability of systems governed by a linear multidimensional time-varying equation, which are encountered in spacecraft dynamics, economics, demographics, and biological systems, gives attention to a lemma dealing with the L∞ stability of an integral equation that results from the differential equation of the system under consideration. Using the proof of this lemma, the main result on L∞ stability is derived accordingly; a corollary of the theorem deals with constant-coefficient systems perturbed by small periodic terms. (O.C.)
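The abstract does not reproduce the lemma itself; as a standard illustration of the step from the differential equation to an integral equation, and of how smallness of the perturbation yields L∞ stability, consider the following variation-of-constants identity (illustrative, not the paper's exact statement).

```latex
% A constant-coefficient system perturbed by a time-varying term,
%   \dot{x}(t) = A x(t) + B(t) x(t), \qquad x(0) = x_0,
% is equivalent, by variation of constants, to the integral equation
\[
  x(t) \;=\; e^{At} x_0 \;+\; \int_0^t e^{A(t-s)}\, B(s)\, x(s)\, \mathrm{d}s .
\]
% If A is Hurwitz, so that \|e^{At}\| \le K e^{-\alpha t}, and \sup_{t\ge 0}\|B(t)\|
% is small enough, Gronwall's inequality applied to this integral equation gives
% \sup_{t\ge 0} \|x(t)\| < \infty, i.e. L^\infty stability.
```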

Relevance:

20.00%

Publisher:

Abstract:

The numbers and mean radio luminosities of giant radio galaxies (GRGs) have been calculated for redshifts up to z = 0.6, assuming a sensitivity limit of 1 Jy at 1 GHz for the observations. The estimates are obtained with a model for the beam propagation, first through the hot gaseous halo around the parent galaxy and thereafter through the even hotter but less dense intergalactic medium. The model is able to accurately reproduce the observed numbers and mean radio luminosities of GRGs at redshifts of less than 0.1, and it predicts that a somewhat larger number of GRGs should be found at redshifts greater than 0.1.

Relevance:

20.00%

Publisher:

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on an abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (such as power, water, a message, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple the variables associated with the nodes. Invariably, the variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting the inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equality constraints are strong compared with the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the Lagrange multipliers. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two as well. A second solution procedure, called the total residue approach, has been embedded into the first one; it modifies the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed, one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
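The two-stage iteration (problem variables, then multipliers) can be sketched on a linear-quadratic toy problem, shown below. The triangle network, cost weights, and step size are hypothetical; for the nonlinear networks treated in the paper, stage one would solve a nonlinear system of necessary conditions via its Jacobian rather than the closed form used here.

```python
# Minimal sketch of the two-stage Lagrange-multiplier iteration on a toy problem:
#   minimise (1/2) x^T Q x  subject to the node-balance equations A x = b.
# A is a reduced node-arc incidence matrix for a hypothetical 3-node triangle
# network, b the net injections, Q a diagonal quadratic "loss" weight on arc flows.
import numpy as np

A = np.array([[ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])
b = np.array([1.0, 0.0])
Q = np.diag([2.0, 1.0, 3.0])
Qinv = np.linalg.inv(Q)

lam = np.zeros(2)                      # one multiplier per retained node
for _ in range(500):
    # Stage 1: problem variables x minimising the Lagrangian for fixed multipliers.
    x = -Qinv @ (A.T @ lam)
    # Stage 2: update the multipliers from the node-balance residual.
    lam = lam + 0.5 * (A @ x - b)

print("arc flows:", np.round(x, 4))
print("node-balance residual:", np.linalg.norm(A @ x - b))
```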

Relevance:

20.00%

Publisher:

Abstract:

This research investigates the impacts of agricultural market liberalization on food security in developing countries and evaluates the supply perspective of food security. The research theme is applied to the agricultural sector in Kenya and in Zambia by studying the role policies played in the maize sub-sector. An evaluation of selected policies introduced at the beginning of the 1980s is made, as well as an assessment of whether those policies influenced maize output. A theoretical model of agricultural production is then formulated to reflect cereal production in a developing-country setting. The study begins with a review of the general framework and aims of the structural adjustment programs and proceeds to their application in the maize sector in Kenya and Zambia. A literature review of the supply and demand synthesis of food security is presented with examples from various developing countries. In contrast to previous studies on food security, this study assesses two countries with divergent economic orientations. The response of the agricultural sector to economic and institutional policies in different settings is also evaluated. Finally, a dynamic time series econometric model is applied to assess the effects of policy on maize output. The empirical findings suggest a weak policy influence on maize output, while the precipitation and acreage variables stand out as core determinants of maize output. The policy dimension of acreage, and how markets influence it, is not discussed at length in this study. Due to weak land rights and tenure structures in these countries, the direct impact of policy change on land markets cannot be precisely measured. Recurring government intervention during the structural policy implementation period impeded the efficient functioning of input and output markets, particularly in Zambia. Input and output prices of maize and fertilizer responded more strongly in Kenya than in Zambia, where the state often ceded to public pressure by revoking pertinent policy measures. These policy interpretations are based on the response of the policy variables, which were more responsive in Kenya than in Zambia. According to the regression results obtained, agricultural markets in general, and the maize sub-sector in particular, responded more positively to the implemented policies in Kenya than in Zambia, which supported a more socialist economic system. It is observed in these results that, in order for policies to be effective, sector and regional dimensions need to be considered. These regional and sector dimensions were not taken into account in the formulation and implementation of the structural adjustment policies in the 1980s. It can be noted that countries with vibrant economic structures and institutions fared better than those which had a firm, socially founded system.
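A minimal sketch of the kind of dynamic time-series regression referred to above is given below; the variable names and the simulated data are placeholders for the actual Kenyan and Zambian series, and the specification is only illustrative.

```python
# Minimal sketch: dynamic (partial-adjustment) regression of maize output on its
# own lag, precipitation, acreage, and a liberalization policy dummy.
# All data below are simulated placeholders, not the series used in the thesis.
import numpy as np

rng = np.random.default_rng(0)
T = 40
precip = rng.normal(100.0, 15.0, T)              # rainfall index
acreage = np.linspace(1.0, 1.6, T)               # planted area (million ha)
policy = (np.arange(T) >= 20).astype(float)      # post-liberalization dummy
output = np.empty(T)
output[0] = 2.0
for t in range(1, T):
    output[t] = (0.5 * output[t - 1] + 0.01 * precip[t]
                 + 0.8 * acreage[t] + 0.05 * policy[t] + rng.normal(0.0, 0.1))

# Regress output_t on a constant, output_{t-1}, precip_t, acreage_t, policy_t.
X = np.column_stack([np.ones(T - 1), output[:-1], precip[1:], acreage[1:], policy[1:]])
beta, *_ = np.linalg.lstsq(X, output[1:], rcond=None)
print("estimated coefficients:", np.round(beta, 3))
```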

Relevance:

20.00%

Publisher:

Abstract:

We present an interactive map-based technique for designing single-input-single-output compliant mechanisms that meet the requirements of practical applications. Our map juxtaposes user specifications with the attributes of real compliant mechanisms stored in a database so that not only can the practical feasibility of the specifications be discerned quickly, but modifications can also be made interactively to the existing compliant mechanisms. The practical utility of the method presented here exceeds that of shape and size optimizations because it accounts for manufacturing considerations, stress limits, and material selection. The premise for the method is the spring-leverage (SL) model, which characterizes the kinematic and elastostatic behavior of compliant mechanisms with only three SL constants. The user specifications are met interactively using beam-based 2D models of compliant mechanisms by changing their attributes such as: (i) overall size in two planar orthogonal directions, separately and together, (ii) uniform resizing of the in-plane widths of all the beam elements, (iii) uniform resizing of the out-of-plane thicknesses of the beam elements, and (iv) the material. We present a design software program with a graphical user interface for interactive design. A case study that describes the design procedure in detail is also presented, while additional case studies are posted on a website. [DOI: 10.1115/1.4001877]
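The SL model itself is specified by the paper's three constants. As a generic stand-in (not the paper's parameterization), the sketch below treats a compliant mechanism as a linear elastostatic two-port whose symmetric 2x2 stiffness matrix likewise has three independent entries, and computes the input and output displacements for a given actuation force.

```python
# Minimal sketch: a compliant mechanism as a linear elastostatic two-port between
# its input and output points. The symmetric 2x2 stiffness matrix has three
# independent entries, mirroring the three-constant idea of the SL model; the
# values and the parameterization are hypothetical, not the paper's SL constants.
import numpy as np

k_ii, k_oo, k_io = 4.0, 1.5, -1.0      # illustrative two-port stiffnesses (N/mm)
K = np.array([[k_ii, k_io],
              [k_io, k_oo]])

F_in, F_out = 10.0, 0.0                # actuation force at input, free output (N)
u_in, u_out = np.linalg.solve(K, [F_in, F_out])

print(f"input displacement  : {u_in:.3f} mm")
print(f"output displacement : {u_out:.3f} mm")
print(f"geometric advantage : {u_out / u_in:.3f}")
```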

Relevance:

20.00%

Publisher:

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model.

Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful for the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
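The two-step procedure described above can be sketched as follows. The simulated panel, the number of factors, and the in-sample evaluation are illustrative simplifications; the thesis evaluates genuine out-of-sample forecasts on the Finnish datasets.

```python
# Minimal sketch of the two-step factor-forecast procedure:
# (1) extract common factors from a standardized panel by principal components,
# (2) regress the target on the factors and its own lag, and compare the mean
#     squared error with an AR(1) benchmark. The panel is simulated.
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 120, 50, 2                              # periods, series, factors
F = rng.normal(size=(T, r))                       # latent factors
X = F @ rng.normal(size=(r, N)) + 0.5 * rng.normal(size=(T, N))
y = F @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=T)    # target series

# Step 1: principal-component factor estimates from the standardized panel.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, _ = np.linalg.svd(Z, full_matrices=False)
F_hat = U[:, :r] * s[:r]                          # estimated factors

# Step 2: regression y_{t+1} = a + b'F_t + c y_t (in-sample fit for brevity;
# the thesis uses out-of-sample relative MSFE).
W = np.column_stack([np.ones(T - 1), F_hat[:-1], y[:-1]])
beta, *_ = np.linalg.lstsq(W, y[1:], rcond=None)
fit_factor = W @ beta

W_ar = np.column_stack([np.ones(T - 1), y[:-1]])  # AR(1) benchmark
phi, *_ = np.linalg.lstsq(W_ar, y[1:], rcond=None)
fit_ar = W_ar @ phi

mse = lambda f: float(np.mean((y[1:] - f) ** 2))
print("relative MSE (factor model / AR benchmark):", mse(fit_factor) / mse(fit_ar))
```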