883 results for Computer aided design
Abstract:
Piezoresistive materials, whose resistivity changes when they are subjected to mechanical stress, are widely used as sensors in many industries, including pressure sensors, accelerometers, inclinometers, and load cells. A basic piezoresistive sensor consists of piezoresistive devices bonded to a flexible structure, such as a cantilever or a membrane; the flexible structure transmits pressure, force, or the inertial force due to acceleration, producing a stress that changes the resistivity of the piezoresistive devices. By applying a voltage across a piezoresistive device, its resistivity can be measured and correlated with the amplitude of the applied pressure or force. The performance of a piezoresistive sensor is closely related to the design of its flexible structure. In this research, we propose a generic topology optimization formulation for the design of piezoresistive sensors whose primary aim is a high response. First, the concept of topology optimization is briefly discussed. Next, the design requirements are clarified, and the corresponding objective functions and optimization problem are formulated. An optimization algorithm is constructed based on these formulations. Finally, several design examples of piezoresistive sensors are presented to confirm the usefulness of the proposed method.
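As a rough illustration of the sensing principle described above (not part of the paper's formulation), the fractional resistance change of a piezoresistor can be related to the local stress through piezoresistive coefficients. The sketch below assumes representative coefficient values of the order reported for p-type silicon; the function name and numbers are illustrative placeholders.

```python
# Minimal sketch: relating mechanical stress to the fractional resistance
# change of a piezoresistor via dR/R = pi_l*sigma_l + pi_t*sigma_t.
# Coefficient values are illustrative placeholders, not measured data.

def fractional_resistance_change(sigma_l, sigma_t, pi_l=71.8e-11, pi_t=-66.3e-11):
    """Return dR/R for longitudinal/transverse stresses given in Pa.

    pi_l, pi_t: piezoresistive coefficients in 1/Pa (placeholder values
    of the order reported for p-type silicon).
    """
    return pi_l * sigma_l + pi_t * sigma_t

# Example: 10 MPa longitudinal stress, 2 MPa transverse stress.
print(fractional_resistance_change(10e6, 2e6))
```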
Abstract:
Since computer viruses pose a serious problem to individual and corporate computer systems, much effort has been dedicated to studying how to avoid their deleterious actions, for example by creating anti-virus programs that act as vaccines on personal computers or at strategic network nodes. Another way to combat virus propagation is to establish preventive policies based on the operation of the system as a whole, which can be modeled with population models similar to those used in epidemiological studies. Here, a modified version of the SIR (Susceptible-Infected-Removed) model is presented, and how its parameters are related to network characteristics is explained. Then, the disease-free and endemic equilibrium points are calculated, stability and bifurcation conditions are derived, and some numerical simulations are shown. The relations among the model parameters in the several bifurcation conditions allow a network design that minimizes the risk posed by viruses. (C) 2009 Elsevier Inc. All rights reserved.
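For readers unfamiliar with the underlying epidemic machinery, the sketch below integrates the classic (unmodified) SIR equations. The abstract's modified model and its network-dependent parameters are not reproduced here, so beta and gamma are generic illustrative rates.

```python
# Minimal sketch of a classic SIR compartment model integrated with SciPy.
# The paper studies a *modified* SIR model whose exact terms are not given
# in the abstract; beta and gamma here are the standard illustrative rates.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    S, I, R = y
    N = S + I + R
    dS = -beta * S * I / N               # new infections
    dI = beta * S * I / N - gamma * I    # infections minus removals
    dR = gamma * I                       # removed (cleaned/immunized) machines
    return [dS, dI, dR]

beta, gamma = 0.4, 0.1                   # illustrative contact and removal rates
sol = solve_ivp(sir, (0, 200), [9990, 10, 0], args=(beta, gamma))
print("basic reproduction number R0 =", beta / gamma)  # endemic if R0 > 1
print("final S, I, R:", sol.y[:, -1])
```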
Abstract:
Thymidine monophosphate kinase (TMPK) has emerged as an attractive target for developing inhibitors of Mycobacterium tuberculosis growth. In this study the receptor-independent (RI) 4D-QSAR formalism has been used to develop QSAR models and corresponding 3D-pharmacophores for a set of 5'-thiourea-substituted alpha-thymidine inhibitors. Models were developed for the entire training set and for a subset of the training set consisting of the most potent inhibitors. The optimized RI 4D-QSAR models are statistically significant (r(2) = 0.90, q(2) = 0.83 for the entire set; r(2) = 0.86, q(2) = 0.80 for the high-potency subset) and also possess good predictivity based on test set predictions. The most and least potent inhibitors, in their respective postulated active conformations derived from the models, were docked into the active site of the TMPK crystallographic structure. There is strong consistency between the 3D-pharmacophore sites defined by the QSAR models and the interactions with binding-site residues. This model identifies new regions of the inhibitors that contain pharmacophore sites, such as the sugar-pyrimidine ring structure and the region of the 5'-arylthiourea moiety. These regions of the ligands can be further explored, and possibly exploited, to identify novel and perhaps better antituberculosis inhibitors of TMPKmt. Furthermore, the 3D-pharmacophores defined by these models can be used as a starting point for future receptor-dependent antituberculosis drug design, as well as to elucidate candidate sites for substituent addition to optimize the ADMET properties of analog inhibitors.
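The two statistics quoted above are the conventional fitted r(2) and the leave-one-out cross-validated q(2). A minimal sketch of how they are typically computed is shown below; the descriptor matrix and activities are randomly generated stand-ins, not data from the study.

```python
# Sketch (assumed, not from the paper) of r^2 and leave-one-out q^2
# (q^2 = 1 - PRESS / total sum of squares) for a simple linear QSAR model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                                    # hypothetical descriptors
y = X @ [1.0, -0.5, 0.3, 0.0] + rng.normal(scale=0.2, size=30)  # hypothetical activities

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                          # conventional r^2

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)
q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(f"r2 = {r2:.3f}, q2 = {q2:.3f}")
```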
Abstract:
Extracting human postural information from video sequences has proved a difficult research problem. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in terms of both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is currently not widely used in tracking applications.
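To make the particle-filtering idea concrete, the sketch below runs a generic bootstrap particle filter on a one-dimensional state. It is not the authors' tracker: the Gaussian observation likelihood stands in for the image-based cost the paper discusses, and all noise parameters are invented.

```python
# Generic bootstrap particle filter sketch: particles approximate the posterior
# over a 1-D state; weights come from an observation likelihood, here a simple
# Gaussian stand-in for an image-based cost.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps = 500, 50
particles = rng.normal(0.0, 1.0, n_particles)        # initial state hypotheses

true_state = 0.0
for t in range(n_steps):
    true_state += 0.1                                           # hypothetical motion
    particles += 0.1 + rng.normal(0.0, 0.05, n_particles)       # propagate with noise
    obs = true_state + rng.normal(0.0, 0.2)                     # noisy observation
    weights = np.exp(-0.5 * ((obs - particles) / 0.2) ** 2)
    weights /= weights.sum()
    particles = rng.choice(particles, size=n_particles, p=weights)  # resample

print("estimate:", particles.mean(), "truth:", true_state)
```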
Abstract:
Objective: The objectives were to determine the postural consequences of varying computer monitor height and to describe self-selected monitor heights and postures. Design: The design involved experimental manipulation of computer monitor height, description of self-selected heights, and measurement of posture and gaze angles. Background: Disagreement exists with regard to the appropriate height of computer monitors. It is known that users alter both head orientation and gaze angle in response to changes in monitor height; however, the relative contribution of atlanto-occipital and cervical flexion to the change in head rotation is unknown. No information is available with regard to self-selected monitor heights. Methods: Twelve students performed a tracking task with the monitor placed at three different heights. The subjects then completed eight trials in which monitor height was first self-selected. Sagittal postural and gaze-angle data were determined by digitizing markers defining a two-dimensional, three-link model of the trunk, cervical spine, and head. Results: The imposed 27 degrees change in monitor height was, on average, accommodated by 18 degrees of head inclination and a 9 degrees change in gaze angle relative to the head. The change in head inclination was achieved by a 6 degrees change in trunk inclination, a 4 degrees change in cervical flexion, and a 7 degrees change in atlanto-occipital flexion. The self-selected height varied depending on the initial monitor height and inclination. Conclusions: Self-selected monitor heights were lower than current 'eye-level' recommendations. Lower monitor heights are likely to reduce both visual and musculoskeletal discomfort. Relevance: Musculoskeletal and visual discomfort may be reduced by placing computer monitors lower than currently recommended. (C) 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
Inhibitors of proteolytic enzymes (proteases) are emerging as prospective treatments for diseases such as AIDS and viral infections, cancers, inflammatory disorders, and Alzheimer's disease. Generic approaches to the design of protease inhibitors are limited by the unpredictability of interactions between, and structural changes to, inhibitor and protease during binding. A computer analysis of superimposed crystal structures for 266 small molecule inhibitors bound to 48 proteases (16 aspartic, 17 serine, 8 cysteine, and 7 metallo) provides the first conclusive proof that inhibitors, including substrate analogues, commonly bind in an extended beta-strand conformation at the active sites of all these proteases. Representative superimposed structures are shown for (a) multiple inhibitors bound to a protease of each class, (b) single inhibitors each bound to multiple proteases, and (c) conformationally constrained inhibitors bound to proteases. Thus inhibitor/substrate conformation, rather than sequence/composition alone, influences protease recognition, and this has profound implications for inhibitor design. This conclusion is supported by NMR, CD, and binding studies for HIV-1 protease inhibitors/substrates which, when preorganized in an extended conformation, have significantly higher protease affinity. Recognition is dependent upon conformational equilibria since helical and turn peptide conformations are not processed by proteases. Conformational selection explains the resistance of folded/structured regions of proteins to proteolytic degradation, the susceptibility of denatured proteins to processing, and the higher affinity of conformationally constrained 'extended' inhibitors/substrates for proteases. Other approaches to extended inhibitor conformations should similarly lead to high-affinity binding to a protease.
Abstract:
The Multicenter Australian Study of Epidural Anesthesia and Analgesia in Major Surgery (The MASTER Trial) was designed to evaluate the possible benefit of epidural block in improving outcome in high-risk patients. The trial began in 1995 and is scheduled to reach the planned sample size of 900 during 2001. This paper describes the trial design and presents data comparing 455 patients randomized in 21 institutions in Australia, Hong Kong, and Malaysia, with 237 patients from the same hospitals who were eligible but not randomized. Nine categories of high-risk patients were defined as entry criteria for the trial. Protocols for ethical review, informed consent, randomization, clinical anesthesia and analgesia, and perioperative management were determined following extensive consultation with anesthesiologists throughout Australia. Clinical and research information was collected in participating hospitals by research staff who may not have been blind to allocation. Decisions about the presence or absence of endpoints were made primarily by a computer algorithm, supplemented by blinded clinical experts. Without unblinding the trial, comparison of eligibility criteria and incidence of endpoints between randomized and nonrandomized patients showed only small differences. We conclude that there is no strong evidence of important demographic or clinical differences between randomized and nonrandomized patients eligible for the MASTER Trial. Thus, the trial results are likely to be broadly generalizable. Control Clin Trials 2000;21:244-256 (C) Elsevier Science Inc. 2000.
Abstract:
In this paper, a new v-metric based approach is proposed to design decentralized controllers for multi-unit nonlinear plants that admit a set of plant decompositions in an operating space. Similar to the gap metric approach in the literature, it is shown that the operating space can also be divided into several subregions based on a v-metric indicator, and that each subregion admits the same controller structure. A comparative case study is presented to display the advantages of the proposed approach over the gap metric approach. (C) 2000 Elsevier Science Ltd. All rights reserved.
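As a rough numerical illustration (an assumption, not the paper's algorithm): for SISO plants the v-gap metric can be estimated as the supremum over frequency of the pointwise chordal distance between the two frequency responses, provided a winding-number condition holds. The sketch below evaluates that chordal distance for two invented first-order plants and does not check the winding-number condition.

```python
# Pointwise chordal distance between two SISO frequency responses; its maximum
# over frequency equals the nu-gap when the winding-number condition holds.
import numpy as np

def chordal_distance(p1, p2):
    """Chordal distance kappa(p1, p2) between two complex frequency responses."""
    return np.abs(p1 - p2) / (np.sqrt(1 + np.abs(p1) ** 2) * np.sqrt(1 + np.abs(p2) ** 2))

w = np.logspace(-2, 2, 2000)                 # frequency grid (rad/s)
s = 1j * w
P1 = 1.0 / (s + 1.0)                         # two illustrative first-order plants
P2 = 1.2 / (s + 0.8)
print("nu-gap estimate (if winding condition holds):", chordal_distance(P1, P2).max())
```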
Abstract:
Simultaneously designing the steady-state and dynamic performance of a process can satisfy much more demanding dynamic performance criteria than designing the dynamics only through the addition of a control system. A method for designing process dynamics based on the eigenvalues of a linearised system has been developed. The eigenvalues are associated with system states using the unit perturbation spectral resolution (UPSR), which characterises the dynamics of each state. The design method uses a homotopy approach to determine a final design that satisfies both steady-state and dynamic performance criteria. A highly interacting single-stage forced-circulation evaporator system, including control loops, was designed by this method with the goal of reducing the time taken for the liquid composition to reach steady state. Initially the system was successfully redesigned to speed up the eigenvalue associated with the liquid composition state, but this did not result in an improved startup performance. Further analysis showed that the integral action of the composition controller was the source of the limiting eigenvalue. Design changes made to speed up this eigenvalue did result in an improved startup performance. The proposed approach provides a structured way to address the design-control interface, giving significant insight into the dynamic behaviour of the system, such that a systematic design or redesign of an existing system can be undertaken with confidence.
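A simplified sketch of the underlying idea follows: eigenvalues of a linearised model are associated with individual states by inspecting the dominant entries of the right eigenvectors. The paper's UPSR is a more formal construction; the Jacobian below is a hypothetical 3-state example, not the evaporator model.

```python
# Simplified sketch (assumed): associating eigenvalues of a linearised system
# with individual states via the dominant entries of the right eigenvectors.
import numpy as np

A = np.array([[-1.0, 0.2, 0.0],     # hypothetical Jacobian of a 3-state model
              [0.1, -0.05, 0.02],
              [0.0, 0.3, -2.0]])

eigvals, eigvecs = np.linalg.eig(A)
for k, lam in enumerate(eigvals):
    dominant_state = np.argmax(np.abs(eigvecs[:, k]))
    print("eigenvalue", np.round(lam, 3), "-> most strongly associated with state", dominant_state)
# A slow (small-magnitude) eigenvalue tied to the composition state would be a
# target for redesign, as in the evaporator example.
```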
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipelined implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound on the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. In this paper, the popular genetic algorithm approach is adopted, since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
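A toy sketch of the kind of GA search involved is given below; it is not the paper's algorithm. Candidate recursive (IIR) denominators whose poles lie outside the unit circle are penalised as unstable, and the target coefficients, population size, and mutation scale are all invented for illustration.

```python
# Toy GA sketch: search over IIR denominator coefficients, penalising
# candidates with poles outside the unit circle (unstable filters).
import numpy as np

rng = np.random.default_rng(2)
order, pop_size, generations = 4, 40, 100
target = np.array([1.0, -1.2, 0.7, -0.2, 0.05])   # hypothetical desired denominator

def fitness(coeffs):
    denom = np.concatenate(([1.0], coeffs))
    poles = np.roots(denom)
    if np.any(np.abs(poles) >= 1.0):               # stability check
        return -1e6
    return -np.sum((denom - target) ** 2)

pop = rng.normal(0.0, 0.3, (pop_size, order))
for _ in range(generations):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
    children = parents + rng.normal(0.0, 0.05, parents.shape)   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best stable denominator:", np.concatenate(([1.0], best)))
```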
Abstract:
In this paper we propose a new framework for evaluating designs based on work domain analysis, the first phase of cognitive work analysis. We develop a rationale for a new approach to evaluation by describing the unique characteristics of complex systems and by showing that systems engineering techniques only partially accommodate these characteristics. We then present work domain analysis as a complementary framework for evaluation. We explain this technique by example, showing how the Australian Defence Force used work domain analysis to evaluate design proposals for a new system called Airborne Early Warning and Control. This case study also demonstrates that work domain analysis is a useful and feasible approach that complements standard techniques for evaluation and promotes a central role for human factors professionals early in the system design and development process. Actual or potential applications of this research include the evaluation of designs for complex systems.
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter, and reinforcement spacing. The GA, a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange among a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained problem, which is appropriate for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained at an extremely fast convergence rate after exploring only a minute portion of the search space. The method can be extended to even more complex optimization problems in other domains.
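The sketch below illustrates only the penalty-based transformation mentioned above: a constrained design objective becomes an unconstrained fitness by adding a penalty proportional to constraint violation. The cost expression, variable names, and limits are invented for illustration and are not the paper's design code requirements.

```python
# Illustrative penalty-based transformation: constrained cost minimisation
# becomes an unconstrained fitness suitable for a GA (which maximises).
def penalised_fitness(slab_thickness_mm, bar_diameter_mm, bar_spacing_mm,
                      penalty_weight=1e4):
    # Hypothetical material/placement cost proxy.
    cost = 0.5 * slab_thickness_mm + 2.0 * (bar_diameter_mm ** 2) / bar_spacing_mm

    violations = 0.0
    if slab_thickness_mm < 200.0:          # hypothetical minimum thickness
        violations += 200.0 - slab_thickness_mm
    if bar_spacing_mm > 300.0:             # hypothetical maximum spacing
        violations += bar_spacing_mm - 300.0

    # GA maximises fitness, so return the negative penalised cost.
    return -(cost + penalty_weight * violations)

print(penalised_fitness(250.0, 16.0, 200.0))
```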
Abstract:
Objectives: Lung hyperinflation may be assessed by computed tomography (CT). As shown for patients with emphysema, however, CT image reconstruction affects the quantification of hyperinflation. We studied the impact of reconstruction parameters on hyperinflation measurements in mechanically ventilated (MV) patients. Design: Observational analysis. Setting: A university hospital-affiliated research unit. Patients: MV patients with injured (n = 5) or normal lungs (n = 6), and spontaneously breathing patients (n = 5). Interventions: None. Measurements and results: Eight image series involving 3, 5, 7, and 10 mm slices and standard and sharp filters were reconstructed from identical CT raw data. Hyperinflated (V-hyper), normally aerated (V-normal), poorly aerated (V-poor), and nonaerated (V-non) volumes were calculated by densitometry as percentages of total lung volume (V-total). V-hyper obtained with the sharp filter systematically exceeded that obtained with the standard filter, with a median (interquartile range) increment of 138 (62-272) ml, corresponding to approximately 4% of V-total. In contrast, sharp filtering minimally affected the other subvolumes (V-normal, V-poor, V-non, and V-total). Decreasing slice thickness also increased V-hyper significantly. When changing from 10 to 3 mm thickness, V-hyper increased by a median of 107 (49-252) ml, in parallel with a small and inconsistent increment in V-non of 12 (7-16) ml. Conclusions: Reconstruction parameters significantly affect the quantitative CT assessment of V-hyper in MV patients. Our observations suggest that sharp filters are inappropriate for this purpose. Thin slices combined with standard filters and more appropriate thresholds (e.g., -950 HU in normal lungs) might improve the detection of V-hyper. Different studies of V-hyper can only be compared if identical reconstruction parameters were used.
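A minimal sketch of CT densitometry as it is commonly performed is shown below. The -900/-500/-100 HU cut-offs are the widely used aeration thresholds, not necessarily those of this study; the abstract itself suggests -950 HU for hyperinflation in normal lungs, so the cut-off is exposed as a parameter, and the voxel sample is synthetic.

```python
# Classify lung voxels by Hounsfield unit (HU) into the four aeration
# compartments and report each as a fraction of total lung volume.
import numpy as np

def aeration_fractions(hu_values, hyper_cutoff=-900):
    """Return V-hyper, V-normal, V-poor, V-non as fractions of total volume."""
    hu = np.asarray(hu_values, dtype=float)
    total = hu.size
    v_hyper = np.sum(hu < hyper_cutoff) / total
    v_normal = np.sum((hu >= hyper_cutoff) & (hu < -500)) / total
    v_poor = np.sum((hu >= -500) & (hu < -100)) / total
    v_non = np.sum(hu >= -100) / total
    return v_hyper, v_normal, v_poor, v_non

# Hypothetical voxel sample drawn only to exercise the function.
sample = np.random.default_rng(3).uniform(-1000, 100, size=100_000)
print(aeration_fractions(sample, hyper_cutoff=-950))
```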
Abstract:
We have designed, built, and tested an early prototype of a novel subxiphoid access system intended to facilitate epicardial electrophysiology, but with possible applications elsewhere in the body. The present version of the system consists of a commercially available insertion needle, a miniature pressure sensor and interconnect tubing, read-out electronics to monitor the pressures measured during the access procedure, and a host computer with user-interface software. The nominal resolution of the system is <0.1 mmHg, and it has deviations from linearity of <1%. During a pilot series of human clinical studies with this system, as well as in an auxiliary study done with an independent method, we observed that the pericardial space contained pressure-frequency components related to both the heart rate and respiratory rate, while the thorax contained components related only to the respiratory rate, a previously unobserved finding that could facilitate access to the pericardial space. We present and discuss the design principles, details of construction, and performance characteristics of this system.
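As a sketch of how cardiac and respiratory pressure-frequency components can be separated (not the authors' code): take the FFT of the pressure recording and look for peaks near the expected heart and breathing rates. The synthetic signal below simply mixes an invented ~1.2 Hz "cardiac" and ~0.25 Hz "respiratory" component with noise.

```python
# FFT-based separation of cardiac and respiratory components in a synthetic
# pressure recording; frequencies and amplitudes are illustrative only.
import numpy as np

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                 # one minute of data
pressure = (0.5 * np.sin(2 * np.pi * 1.2 * t)        # "cardiac" component
            + 2.0 * np.sin(2 * np.pi * 0.25 * t)     # "respiratory" component
            + 0.1 * np.random.default_rng(4).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(pressure))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]     # two strongest spectral bins
print("dominant frequencies (Hz):", np.sort(peaks))
```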