959 results for Space use


Relevance:

20.00%

Publisher:

Abstract:

Objective To evaluate drug interaction software programs and determine their accuracy in identifying drug-drug interactions that may occur in intensive care units. Setting The study was developed in Brazil. Method Drug interaction software programs were identified through a bibliographic search in PubMed and in LILACS (a database covering health sciences literature published in Latin American and Caribbean countries). The programs' sensitivity, specificity, and positive and negative predictive values were determined to assess their accuracy in detecting drug-drug interactions. The accuracy of the software programs identified was determined using 100 clinically important interactions and 100 clinically unimportant ones. Stockley's Drug Interactions, 8th edition, was employed as the gold standard in the identification of drug-drug interactions. Main outcome Sensitivity, specificity, and positive and negative predictive values. Results The programs studied were Drug Interaction Checker (DIC), Drug-Reax (DR), and Lexi-Interact (LI). DR displayed the highest sensitivity (0.88) and DIC the lowest (0.69). A close similarity was observed among the programs regarding specificity (0.88-0.92) and positive predictive values (0.88-0.89). DIC had the lowest negative predictive value (0.75) and DR the highest (0.91). Conclusion The DR and LI programs displayed appropriate sensitivity and specificity for identifying drug-drug interactions of interest in intensive care units. Drug interaction software programs help pharmacists and health care teams in the prevention and recognition of drug-drug interactions and optimize the safety and quality of care delivered in intensive care units.
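
As an illustration of the accuracy metrics reported here, the sketch below computes sensitivity, specificity and predictive values from a 2x2 table against a gold standard; the counts are hypothetical, not the study's data.

```python
# Illustrative sketch: accuracy metrics for a drug-interaction screening program,
# computed against a gold-standard classification (e.g. Stockley's).
# The counts below are hypothetical placeholders, not the study's results.

def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged / truly important interactions
        "specificity": tn / (tn + fp),   # not flagged / truly unimportant interactions
        "ppv": tp / (tp + fp),           # flagged pairs that really matter
        "npv": tn / (tn + fn),           # unflagged pairs that really are safe
    }

# Example with made-up counts out of 100 important + 100 unimportant interactions:
print(screening_metrics(tp=88, fn=12, tn=90, fp=10))
```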

Relevance:

20.00%

Publisher:

Abstract:

This letter presents some notes on the use of the Gram matrix in observability analysis. This matrix is constructed by treating the rows of the measurement Jacobian matrix as vectors, and it can be employed in observability analysis and restoration methods. The determination of nonredundant pseudo-measurements (normally injection pseudo-measurements) for merging observable islands into a single observable system is carried out by analyzing the pivots of the Gram matrix. The Gram matrix can also be used to verify local redundancy, which is important in measurement system planning. Some numerical examples are used to illustrate these features. Other features of the Gram matrix are under study.
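
A minimal numerical sketch of the idea (not the letter's implementation): build the Gram matrix from the rows of a measurement Jacobian H and inspect its pivots; zero pivots flag linearly dependent (redundant) rows. The small Jacobian below is hypothetical.

```python
import numpy as np
from scipy.linalg import lu

H = np.array([          # hypothetical decoupled measurement Jacobian (rows = measurements)
    [ 1.0, -1.0,  0.0,  0.0],
    [ 0.0,  1.0, -1.0,  0.0],
    [ 1.0,  0.0, -1.0,  0.0],   # redundant: sum of the two rows above
    [ 0.0,  0.0,  1.0, -1.0],
])

G = H @ H.T                      # Gram matrix: G[i, j] = <row_i, row_j>
_, _, U = lu(G)                  # triangular factorization of G
pivots = np.abs(np.diag(U))
print("pivots:", np.round(pivots, 6))
print("rank =", int(np.sum(pivots > 1e-9)), "of", H.shape[0], "measurements")
```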

Relevance:

20.00%

Publisher:

Abstract:

This paper aims to formulate and investigate the application of various nonlinear H∞ control methods to a free-floating space manipulator subject to parametric uncertainties and external disturbances. From a tutorial perspective, a model-based approach and adaptive procedures based on linear parametrization, neural networks and fuzzy systems are covered by this work. A comparative study is conducted based on experimental implementations performed with an actual underactuated fixed-base planar manipulator which is, following the DEM concept, dynamically equivalent to a free-floating space manipulator.

Relevance:

20.00%

Publisher:

Abstract:

Converting aeroelastic vibrations into electricity for low power generation has received growing attention over the past few years. In addition to potential applications for aerospace structures, the goal is to develop alternative and scalable configurations for wind energy harvesting to use in wireless electronic systems. This paper presents modeling and experiments of aeroelastic energy harvesting using piezoelectric transduction with a focus on exploiting combined nonlinearities. An airfoil with plunge and pitch degrees of freedom (DOF) is investigated. Piezoelectric coupling is introduced to the plunge DOF while nonlinearities are introduced through the pitch DOF. A state-space model is presented and employed for the simulations of the piezoaeroelastic generator. A two-state approximation to Theodorsen aerodynamics is used in order to determine the unsteady aerodynamic loads. Three case studies are presented. First the interaction between piezoelectric power generation and linear aeroelastic behavior of a typical section is investigated for a set of resistive loads. Model predictions are compared to experimental data obtained from the wind tunnel tests at the flutter boundary. In the second case study, free play nonlinearity is added to the pitch DOF and it is shown that nonlinear limit-cycle oscillations can be obtained not only above but also below the linear flutter speed. The experimental results are successfully predicted by the model simulations. Finally, the combination of cubic hardening stiffness and free play nonlinearities is considered in the pitch DOF. The nonlinear piezoaeroelastic response is investigated for different values of the nonlinear-to-linear stiffness ratio. The free play nonlinearity reduces the cut-in speed while the hardening stiffness helps in obtaining persistent oscillations of acceptable amplitude over a wider range of airflow speeds. Such nonlinearities can be introduced to aeroelastic energy harvesters (exploiting piezoelectric or other transduction mechanisms) for performance enhancement.
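
A hedged sketch of the kind of combined pitch nonlinearity discussed above: a free-play gap of ±delta combined with cubic hardening stiffness. The parameter values are illustrative only, not the paper's identified values.

```python
import numpy as np

def pitch_restoring_moment(alpha, k_alpha=1.0, k3_ratio=10.0, delta=np.radians(0.5)):
    """Restoring moment M(alpha) with free play (no moment inside |alpha| < delta)
    and cubic hardening outside the gap; k3_ratio is the nonlinear-to-linear ratio."""
    a_eff = np.where(np.abs(alpha) > delta,
                     alpha - np.sign(alpha) * delta,   # angle measured from the gap edge
                     0.0)                              # inside the free-play region
    return k_alpha * a_eff + k_alpha * k3_ratio * a_eff**3

alpha = np.radians(np.linspace(-3, 3, 7))
print(np.round(pitch_restoring_moment(alpha), 4))
```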

Relevance:

20.00%

Publisher:

Abstract:

This paper aims to find relations between the socioeconomic characteristics, activity participation, land use patterns and travel behavior of residents in the Sao Paulo Metropolitan Area (SPMA) by using Exploratory Multivariate Data Analysis (EMDA) techniques. The variables influencing travel pattern choices are investigated using (a) Cluster Analysis (CA), grouping and characterizing the Traffic Zones (TZ) and proposing the independent variable called Origin Cluster, and (b) a Decision Tree (DT) to find a priori unknown relations among socioeconomic characteristics, land use attributes of the origin TZ and destination choices. The analysis was based on the origin-destination home-interview survey carried out in the SPMA in 1997. The DT application revealed the variables of greatest influence on travel pattern choice. The most important independent variable considered by the DT is car ownership, followed by the use of transportation "credits" for transit tariff, and, finally, activity participation variables and Origin Cluster. With these results, it was possible to analyze the influence of family income, car ownership, position of the individual in the family, use of transportation "credits" for transit tariff (mainly for travel mode sequence choice), activity participation (activity sequence choice) and Origin Cluster (destination/travel distance choice).
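
An illustrative sketch (not the paper's pipeline) of the two EMDA steps described above: cluster traffic zones on land-use/socioeconomic attributes, then let a decision tree rank the variables driving travel-pattern choice. All data, column names and the toy target below are synthetic placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
zones = pd.DataFrame({                      # one row per traffic zone (TZ)
    "tz_id": range(50),
    "jobs_density": rng.gamma(2.0, 50.0, 50),
    "pop_density": rng.gamma(2.0, 80.0, 50),
})
zones["origin_cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    zones[["jobs_density", "pop_density"]])

trips = pd.DataFrame({                      # one row per surveyed trip-maker
    "origin_tz": rng.integers(0, 50, 500),
    "car_ownership": rng.integers(0, 3, 500),
    "transit_credit": rng.integers(0, 2, 500),
    "n_activities": rng.integers(1, 5, 500),
})
trips = trips.merge(zones[["tz_id", "origin_cluster"]], left_on="origin_tz", right_on="tz_id")
trips["travel_pattern"] = (trips["car_ownership"] > 0).astype(int)   # toy target class

features = ["car_ownership", "transit_credit", "n_activities", "origin_cluster"]
tree = DecisionTreeClassifier(max_depth=3).fit(trips[features], trips["travel_pattern"])
print(dict(zip(features, tree.feature_importances_.round(3))))
```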

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the use of optimization techniques in the design of a steel riser. Two methods are used: the genetic algorithm, which imitates the process of natural selection, and simulated annealing, which is based on the metal annealing process. Both are capable of searching a given solution space for the best feasible riser configuration according to predefined criteria. Optimization issues are discussed, such as problem encoding, parameter selection, definition of the objective function, and constraints. A comparison between the results obtained for economic and structural objective functions is made for a case study. Parallelization of the optimization methods is also addressed. [DOI: 10.1115/1.4001955]
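
A minimal simulated-annealing loop in the spirit described above; the objective here is a stand-in quadratic cost, not the paper's riser model, and all parameters are illustrative.

```python
import math
import random

def objective(x):                     # placeholder cost (e.g. a riser cost/weight proxy)
    return (x - 3.7) ** 2 + 1.0

def simulated_annealing(x0, t0=10.0, cooling=0.95, n_iter=2000, step=0.5, seed=1):
    random.seed(seed)
    x, fx, t = x0, objective(x0), t0
    best_x, best_f = x, fx
    for _ in range(n_iter):
        cand = x + random.uniform(-step, step)      # perturb the current configuration
        fc = objective(cand)
        # accept better candidates always, worse ones with a Boltzmann probability
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                                # cool the "temperature"
    return best_x, best_f

print(simulated_annealing(x0=0.0))
```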

Relevance:

20.00%

Publisher:

Abstract:

This work discusses the determination of breathing patterns in time sequences of images obtained from magnetic resonance (MR) and their use in the temporal registration of coronal and sagittal images. The registration is made without the use of any triggering information or any special gas to enhance the contrast. The temporal sequences of images are acquired in free breathing. The real movement of the lung has never been seen directly, as it is totally dependent on its surrounding muscles and collapses without them. The visualization of the lung in motion is a current topic of research in medicine. The lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared to computerized tomography (CT), MR imaging involves longer acquisition times, but it is preferable because it does not involve radiation. As coronal and sagittal sequences of images are orthogonal to each other, their intersection corresponds to a segment in three-dimensional space. The registration is based on the analysis of this intersection segment. A time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. The algorithm proposed in this work can detect asynchronous movements of the internal lung structures and of the organs surrounding the lung. It is assumed that the diaphragmatic movement is the principal movement and that all the lung structures move almost synchronously. The synchronization is performed through a pattern named the respiratory function, which is obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function. A greedy active contour algorithm adjusts small discrepancies caused by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns. Finally, the composition of coronal and sagittal image pairs that are in the same breathing phase is performed by comparing the respiratory patterns originating from the diaphragmatic and upper boundary surfaces. When available, the respiratory patterns associated with internal lung structures are also used. The results of the proposed method are compared with the pixel-by-pixel comparison method. The proposed method increases the number of registered pairs representing composed images and allows an easy check of the breathing phase.
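
A sketch, on synthetic data rather than MR images, of the 2DST construction described above: each time frame contributes one column sampled along the coronal/sagittal intersection segment, and a crude per-frame summary of that stack approximates a respiratory function.

```python
import numpy as np

n_frames, n_pixels = 120, 256
t = np.arange(n_frames)
diaphragm_row = 150 + 20 * np.sin(2 * np.pi * t / 40)     # fake breathing motion

# each time frame contributes one column: the intensity profile along the
# intersection segment of the coronal and sagittal slices
twodst = np.zeros((n_pixels, n_frames))
for k in range(n_frames):
    profile = np.zeros(n_pixels)
    profile[: int(diaphragm_row[k])] = 1.0                # "lung" above the diaphragm
    twodst[:, k] = profile

respiratory_function = twodst.sum(axis=0)                 # crude per-frame breathing phase
print(respiratory_function[:10].round(1))
```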

Relevance:

20.00%

Publisher:

Abstract:

The cost of a new ship design depends heavily on the principal dimensions of the ship; however, minimizing those dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and form coefficients of tankers using a genetic algorithm. A multi-objective optimization problem was formulated using two objective attributes in the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of this study, three real ships are used as case studies. [DOI: 10.1115/1.4002740]
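
As a small sketch of the nondominated-frontier idea over the two attributes used above (total cost and mean oil outflow, both minimized), the code below filters random placeholder points rather than actual tanker designs.

```python
import numpy as np

rng = np.random.default_rng(42)
designs = rng.uniform(0.0, 1.0, size=(200, 2))   # columns: [total_cost, mean_oil_outflow]

def pareto_front(points):
    """Boolean mask of points not dominated by any other point
    (dominated = another point is <= in both objectives and < in at least one)."""
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(points, i, axis=0)
        dominated = np.any(np.all(others <= points[i], axis=1) &
                           np.any(others < points[i], axis=1))
        mask[i] = not dominated
    return mask

front = designs[pareto_front(designs)]
print(f"{front.shape[0]} non-dominated designs out of {designs.shape[0]}")
```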

Relevance:

20.00%

Publisher:

Abstract:

Static mixers with improved performance were developed from CFD simulations in a stepwise approach. The relevant geometric features of simple mixer designs and the corresponding mixing mechanisms (laminar shear, elongational flow, and distributive mixing) were identified first. This information was used to formulate guidelines for the development of new geometries. The solid elements of the static mixer should: (a) provide restrictions to the flow; (b) deflect the flow; (c) be sequentially rotated around the flow direction to provide symmetry; (d) extend from the center of the pipe to the vicinity of the walls to avoid short-circuiting; and (e) distribute and remix the flow. Based on these guidelines, two improved mixer designs were developed: the DS A-I mixer has good mixing efficiency and an acceptable pressure drop, while the Fins 35 degrees mixer is more efficient and compact but requires a larger pressure drop. Their performance indicates that they are suitable for industrial applications.

Relevance:

20.00%

Publisher:

Abstract:

In this work, a study on the role of the long-range term of excess Gibbs energy models in the modeling of aqueous systems containing polymers and salts is presented. Four different approaches on how to account for the presence of polymer in the long-range term were considered, and simulations were conducted considering aqueous solutions of three different salts. The analysis of water activity curves showed that, in all cases, a liquid-phase separation may be introduced by the sole presence of the polymer in the long-range term, regardless of how it is taken into account. The results lead to the conclusion that there is no single exact solution for this problem, and that any kind of approach may introduce inconsistencies.
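
For reference only (not taken from the paper), the simplest Debye-Hückel limiting-law form of the long-range electrostatic contribution is shown below; A is the solvent-dependent Debye-Hückel parameter and I the ionic strength, and the approaches compared above differ essentially in whether and how the polymer enters the solvent properties behind A and the composition behind I.

```latex
\[
  \ln \gamma_i^{\mathrm{LR}} = -A\, z_i^{2} \sqrt{I},
  \qquad
  I = \tfrac{1}{2} \sum_j m_j z_j^{2}
\]
```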

Relevance:

20.00%

Publisher:

Abstract:

In order to lower the excessive costs of metallic prosthesis materials, alternatives to Ti and Ti alloys have been sought. In this study, the corrosion resistance of the DIN 1.4575 superferritic stainless steel, either solution annealed or solution annealed and aged at 475 °C for periods varying from 100 to 1080 h, was investigated by electrochemical impedance spectroscopy (EIS) and potentiodynamic polarization methods in Hanks' solution. The solution-annealed and the 1080-h-aged samples were also tested using scanning electrochemical microscopy (SECM) in a 0.1 mol/L NaCl solution at 25 °C. The EIS results showed that the corrosion resistance of the DIN 1.4575 steel decreases with heat treatment time at 475 °C, probably due to alpha prime formation. In addition to the decrease in the overall impedance values, the low-frequency limit of the Nyquist diagrams shows a progressive change from an almost capacitive response to a resistive behavior as the heat treatment time increases. Pitting corrosion resistance also decreased with aging time at 475 °C.

Relevance:

20.00%

Publisher:

Abstract:

The solar driven photo-Fenton process for treating water containing phenol as a contaminant has been evaluated by means of pilot-scale experiments with a parabolic trough solar reactor (PTR). The effects of Fe(II) (0.04-1.0 mmol L⁻¹), H₂O₂ (7-270 mmol L⁻¹), initial phenol concentration (100 and 500 mg C L⁻¹), solar radiation, and operation mode (batch and fed-batch) on the process efficiency were investigated. More than 90% of the dissolved organic carbon (DOC) was removed within 3 hours of irradiation or less, a performance equivalent to that of artificially-irradiated reactors, indicating that solar light can be used either as an effective complementary or as an alternative source of photons for the photo-Fenton degradation process. A non-linear multivariable model based on a neural network was fit to the experimental results of batch-mode experiments in order to evaluate the relative importance of the process variables considered on the DOC removal over the reaction time. This included solar radiation, which is not a controlled variable. The observed behavior of the system in batch-mode was compared with fed-batch experiments carried out under similar conditions. The main contribution of the study consists of the results from experiments under different conditions and the discussion of the system behavior. Both constitute important information for the design and scale-up of solar radiation-based photodegradation processes.
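
In the spirit of the non-linear multivariable neural-network model mentioned above, the sketch below fits a small network relating process variables to DOC removal; the variables, ranges and training data are synthetic stand-ins, not the pilot-plant measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: [Fe(II) mmol/L, H2O2 mmol/L, initial DOC mg/L, accumulated solar dose, time h]
X = rng.uniform([0.04, 7, 100, 0, 0], [1.0, 270, 500, 50, 3], size=(300, 5))
doc_removed = 1 - np.exp(-0.02 * X[:, 3] * (0.5 + X[:, 0]))      # toy response surface

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, doc_removed)
print("R^2 on training data:", round(model.score(X, doc_removed), 3))
```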

Relevance:

20.00%

Publisher:

Abstract:

Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit the PD-based input space exactly. Together, the input stimuli and coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
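
A hedged sketch of the general idea (not the authors' PD tools): enumerate parameter domains, exclude invalid combinations up front, draw constrained-random stimuli from what remains, and define the coverage bins over exactly that reduced space. The parameters and the constraint are hypothetical.

```python
import itertools
import random

DOMAINS = {                               # hypothetical IP-core configuration domains
    "burst_len": [1, 4, 8, 16],
    "data_width": [8, 16, 32],
    "mode": ["read", "write"],
}

def is_valid(tc):                         # constraint removing an invalid scenario
    return not (tc["mode"] == "read" and tc["burst_len"] == 1 and tc["data_width"] == 8)

valid_space = [dict(zip(DOMAINS, combo)) for combo in itertools.product(*DOMAINS.values())]
valid_space = [tc for tc in valid_space if is_valid(tc)]

coverage = {tuple(tc.items()): False for tc in valid_space}   # bins = valid input space only
random.seed(0)
while not all(coverage.values()):                             # drive until full coverage
    tc = random.choice(valid_space)                           # constrained-random draw
    coverage[tuple(tc.items())] = True                        # (testbench would apply tc here)
print(f"covered {sum(coverage.values())} / {len(coverage)} valid scenarios")
```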

Relevance:

20.00%

Publisher:

Abstract:

Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For this purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
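
For reference, the four-point central-difference operator the coefficients refer to is sketched below for a field component f on a staggered grid of spacing Δx; matching Taylor-series terms gives the textbook values C1 = 9/8 and C2 = -1/24, whereas the optimization described above instead tunes the pair to the chosen grid resolution and time step.

```latex
\[
  \left.\frac{\partial f}{\partial x}\right|_{i}
  \approx \frac{1}{\Delta x}\Big[
      C_1\big(f_{i+\frac{1}{2}} - f_{i-\frac{1}{2}}\big)
    + C_2\big(f_{i+\frac{3}{2}} - f_{i-\frac{3}{2}}\big)\Big]
\]
```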

Relevance:

20.00%

Publisher:

Abstract:

In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic, and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs.
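
An illustrative Euler-discretized logistic recursion of the kind described above (a sketch of the idea, not necessarily the exact update law derived in the paper): dp_i/dt = a·p_i·(1 - p_i/p_i*), where p_i* is the power that meets the target SINR at the current interference, so that p_i/p_i* equals the SINR-to-target ratio. The link gains, noise and step parameters are hypothetical.

```python
import numpy as np

G = np.array([[1.0, 0.1],        # link gains: G[i, j] = gain from user j to receiver i
              [0.2, 0.8]])
noise, sinr_target, a = 1e-3, 5.0, 0.5          # hypothetical system parameters

def sinr(p):
    signal = np.diag(G) * p
    interference = G @ p - signal
    return signal / (interference + noise)

p = np.full(2, 1e-2)                             # initial transmit powers
for _ in range(100):                             # Euler steps (unit step folded into a)
    p = p + a * p * (1.0 - sinr(p) / sinr_target)

print("powers:", p.round(4), "SINRs:", sinr(p).round(3))
```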