891 results for Design methods
Abstract:
BACKGROUND: Controversies exist regarding the indications for unicompartmental knee arthroplasty. The objective of this study is to report the mid-term results and examine predictors of failure in a metal-backed unicompartmental knee arthroplasty design. METHODS: At a mean follow-up of 60 months, 80 medial unicompartmental knee arthroplasties (68 patients) were evaluated. Implant survivorship was analyzed using the Kaplan-Meier method. The Knee Society objective and functional scores and radiographic characteristics were compared before surgery and at final follow-up. A Cox proportional hazards model was used to examine the association of patient age, gender, obesity (body mass index > 30 kg/m2), diagnosis, Knee Society scores and patellar arthrosis with failure. RESULTS: There were 9 failures during follow-up. The mean Knee Society objective and functional scores were respectively 49 and 48 points preoperatively and 95 and 92 points postoperatively. The survival rate was 92% at 5 years and 84% at 10 years. Mean age was lower in the failure group than in the non-failure group (p < 0.01). However, none of the factors assessed was independently associated with failure based on the results of the Cox proportional hazards model. CONCLUSION: Gender, preoperative diagnosis, preoperative objective and functional scores and patellar osteophytes were not independent predictors of failure of unicompartmental knee implants, although high body mass index trended toward significance. The findings suggest that the standard criteria for UKA may be expanded without compromising outcomes, although caution may be warranted in patients with very high body mass index pending additional data to confirm our results. LEVEL OF EVIDENCE: IV.
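The survivorship analysis in this abstract relies on the Kaplan-Meier estimator. As a minimal sketch of the product-limit calculation, with made-up follow-up data rather than the study's:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.

    times  : follow-up time for each implant (e.g. months)
    events : 1 if the implant failed at that time, 0 if censored
    Returns (time, survival) steps at each distinct failure time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    steps = []
    for t in sorted(set(times)):
        at_t = [e for tt, e in data if tt == t]
        failures = sum(at_t)
        if failures:
            survival *= 1.0 - failures / n_at_risk
            steps.append((t, survival))
        n_at_risk -= len(at_t)  # failures and censored cases both leave the risk set
    return steps

# Hypothetical follow-up data (months, failure flag):
print(kaplan_meier([5, 10, 20, 30], [1, 0, 1, 0]))
# -> [(5, 0.75), (20, 0.375)]
```

Note how the censored implant at 10 months still shrinks the risk set, so the failure at 20 months removes half of the remaining survival probability.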
Abstract:
The use of high-quality quarried crushed rock aggregates is generally required to comply with current specifications for unbound granular materials (UGMs) in pavements. The source of these high-quality materials can be a long distance from the site, resulting in high transportation costs. The use of more local sources of marginal materials or the use of secondary aggregates is not allowed if they do not fully comply with existing specifications. These materials can, however, be assessed for their suitability for use in a pavement by considering performance criteria such as resistance to permanent deformation and degradation instead of relying on compliance with inflexible specifications. The final thickness of the asphalt cover and the pavement depth are governed by conventional pavement design methods, which consider the number of vehicle passes, subgrade strength, and some material property, commonly the California bearing ratio or resilient modulus. A pavement design method that includes as a design criterion an assessment of the resistance to deformation of a UGM in a pavement structure at a particular stress state is proposed. The particular stress state at which the aggregate is to perform in an acceptable way is related to the in situ stress, that is, the stress that the aggregate is anticipated to experience at a particular depth in the pavement. Because the stresses are more severe closer to the pavement surface, the aggregates should be better able to resist these stresses the closer they are laid to the surface in the pavement. This method was applied to two Northern Ireland aggregates of different quality (NI Good and NI Poor). The results showed that the NI Poor aggregate performed at an acceptable level with respect to permanent deformation, provided that a minimum of 70 mm of asphalt cover was provided. It was predicted that the NI Good material would require 60 mm of asphalt cover.
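The in-situ stress criterion described above can be illustrated with the classical Boussinesq solution for the vertical stress beneath the centre of a uniformly loaded circular area (a common idealisation of a wheel load). The pressure, radius and allowable stress below are hypothetical values for illustration, not figures from the study:

```python
def stress_on_axis(q, a, z):
    """Boussinesq vertical stress at depth z (m) beneath the centre of a
    circular area of radius a (m) carrying uniform pressure q (kPa)."""
    if z <= 0:
        return q
    return q * (1.0 - (1.0 / (1.0 + (a / z) ** 2)) ** 1.5)

def min_cover_depth(q, a, sigma_allowable, step=0.001):
    """Smallest depth (m) at which the stress has attenuated to the level
    the unbound granular material was shown to tolerate."""
    z = step
    while stress_on_axis(q, a, z) > sigma_allowable:
        z += step
    return z

# A 700 kPa tyre pressure on a 0.15 m radius contact patch, with a UGM
# assumed to perform acceptably below 200 kPa (illustrative threshold):
cover = min_cover_depth(700.0, 0.15, 200.0)
```

Because the stress decays monotonically with depth, a weaker aggregate simply demands a larger `cover`, which is the logic behind the thicker asphalt required over the NI Poor material.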
Abstract:
This paper presents an automated design framework for the development of individual part-forming tools for a composite stiffener. The framework uses parametrically developed design geometries for both the part and its layup tool. The framework has been developed with a functioning user interface where part/tool combinations are passed to a virtual environment for utility-based assessment of their features and assemblability characteristics. The work demonstrates clear benefits in process design methods, with conventional design timelines reduced from hours and days to minutes and seconds. The methods developed here were able to produce a digital mock-up of a component with its associated layup tool in less than 3 minutes. The virtual environment presenting the design to the designer for interactive assembly planning was generated in 20 seconds. Challenges still exist in determining the level of reality required to provide an effective learning environment in the virtual world. Full representation of physical phenomena such as gravity, part clashes and standard build functions requires further work to model real physical behaviour more accurately.
Abstract:
This thesis describes a framework, grounded in the multi-layer paradigm, for analysing, modelling, designing and optimising communication systems. It explores a new perspective on the physical layer that arises from the relations between information theory, estimation, probabilistic methods, communication theory and coding. This framework leads to design methods for the next generation of high-rate communication systems. In addition, the thesis explores several access-layer techniques, based on the relation between delay and throughput, for the design of delay-tolerant wireless networks. Fundamental results on the interplay between information theory and estimation theory lead to the proposal of an alternative paradigm for the analysis, design and optimisation of communication systems. Building on studies of the relation between mutual information and MMSE, the approach described in the thesis overcomes, in a novel way, the difficulties inherent in optimising reliable information transmission rates in communication systems, and enables the exploration of optimal power allocation and optimal precoding structures for different channel models: wired, wireless and optical. The thesis also addresses the problem of delay, in an attempt to answer questions raised by the enormous demand for high data rates in communication systems. This is done by proposing new models for systems with network coding at layers above the physical layer. In particular, it addresses the use of network coding for time-varying, delay-sensitive channels. This was demonstrated through the proposal of a new model and adaptive scheme, whose algorithms were applied to wireless systems with complex fading, such as satellite communication systems.
The thesis further addresses the use of network coding in demanding handover scenarios. This is done by proposing new models of IEEE 802.11 MAC WiFi transmission, which are compared against network coding and shown to enable seamless handover. Through analysis and proposals supported by simulations, this thesis thus argues that the design of communication systems should consider transmission and coding strategies that are not only close to channel capacity but also delay tolerant, and that such strategies must be devised with the channel characteristics and the physical layer in mind.
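The relation between mutual information and MMSE that this work builds on can be checked numerically for the scalar Gaussian channel, where (in nats) I(snr) = ½ ln(1 + snr) and the MMSE of estimating a unit-variance Gaussian input is 1/(1 + snr). This is an illustrative textbook check, not an excerpt from the thesis:

```python
import math

def mutual_info(snr):
    """Mutual information of the scalar Gaussian channel with Gaussian
    input, in nats: I(snr) = 0.5 * ln(1 + snr)."""
    return 0.5 * math.log(1.0 + snr)

def mmse(snr):
    """MMSE of estimating a unit-variance Gaussian input: 1 / (1 + snr)."""
    return 1.0 / (1.0 + snr)

# I-MMSE relation: dI/dsnr = mmse(snr) / 2. Check by central difference.
snr, h = 3.0, 1e-6
numeric_derivative = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)
assert abs(numeric_derivative - mmse(snr) / 2) < 1e-9
```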
Abstract:
Dissertation presented to obtain the Doutoramento (Ph.D.) degree in Biochemistry at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa
Abstract:
This research continues earlier efforts to clarify the design process, and more specifically the architectural design of the house. It also seeks to develop the designer's reflexivity about the acts he performs by offering him a viewpoint from the angle of psychoanalysis. It extends the initiatives of the third generation of design-methodology research by addressing an aspect that has so far been little explored: the unconscious process of architectural design. Its central problem is the question of the unconscious origins of the architectural designer's creative work. Creation being one of the important subjects of psychoanalysis, several psychoanalytic concepts, such as Freudian sublimation, address it and attempt to explain it. Since design is a discipline of creation, psychoanalysis can inform us about the design process and offer us the possibility of observing and approaching it. The architectural metaphor, used to convey Freudian theory, is also the field of application of several psychoanalytic theories and concepts. Architecture in general, and that of the house in particular, given the personal emotional investment the house carries for its designer, builder or user, offers a terrain where many psychoanalytic concepts can be observed and applied. This research approaches the architectural example through the concepts developed by the three most important psychoanalytic theories: Freudian, Lacanian and Jungian. These concepts are applied through a "self-analysis" that places the designer in a double position, that of the subject of the research and that of the researcher, which greatly fosters the desired reflexivity.
Free association, one of the methods of psychoanalysis, will be the first step that triggers the self-analysis process and accompanies its development. Applied to the discourse and the form of the house, free association will seek to identify several psychic mechanisms likely to inform our investigation. The results of applying the Freudian concepts will then serve as the basis for applying the concepts of Lacanian and Jungian theory. At the end of this analysis, we will be able to present a model of the unconscious design process that led to the creation of the house taken as an example. We will thereby discover the nature of the unconscious process that precedes and accompanies the designer's creative work. We will also see how this process feeds on the designer's experiences, going back to the first years of his childhood. This will demonstrate the possibility of applying psychoanalytic concepts to architectural design and thereby help determine possible ways of conceiving the contribution of psychoanalysis to the practice of design and to its teaching.
Abstract:
Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give results with representation error, conversion error, and/or rounding error. In this world of precision, such errors are no longer tolerable. These errors can be eliminated, and better accuracy achieved, if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. Performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost. A multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of an efficient decimal MAC (Multiply-Accumulate) architecture for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008).
The research goal is to design and synthesize decimal MAC architectures that achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
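The representation error that motivates hardware DFP can be demonstrated with Python's software `decimal` library, which also provides a fused multiply-accumulate (`Decimal.fma`), the very operation a MAC unit implements in hardware. This is an illustrative sketch, not the thesis's hardware design:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so a simple sum
# accumulates representation error:
assert 0.1 + 0.1 + 0.1 != 0.3          # binary arithmetic misses

# Decimal arithmetic keeps the exact decimal value, as a DFP unit would:
assert Decimal("0.1") * 3 == Decimal("0.3")

# A multiply-accumulate (a * b + c) computed in one correctly rounded step:
acc = Decimal("100.05").fma(Decimal("2"), Decimal("0.10"))
print(acc)  # -> 200.20
```

The `fma` result is computed exactly and rounded once, mirroring the single-rounding behaviour a hardware decimal MAC provides under IEEE 754-2008.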
Abstract:
Salient pole brushless alternators coupled to IC engines are extensively used as stand-by power supply units for meeting industrial power demands. Design of such generators demands high power-to-weight ratio, high efficiency and low cost per kVA output. Moreover, the performance characteristics of such machines, like voltage regulation and short circuit ratio (SCR), are critical when these machines are put into parallel operation, and alternators for critical applications like defence and aerospace demand very low harmonic content in the output voltage. While designing such alternators, accurate prediction of machine characteristics, including total harmonic distortion (THD), is essential to minimize development cost and time. Total harmonic distortion in the output voltage of alternators should be as low as possible, especially when powering very sophisticated and critical applications. The output voltage waveform of a practical AC generator is a replica of the space distribution of the flux density in the air gap, and several factors such as the shape of the rotor pole face, core saturation, slotting and style of coil disposition make the realization of a sinusoidal air gap flux wave impossible. These flux harmonics introduce undesirable effects on alternator performance, like high neutral current due to triplen harmonics, voltage distortion, noise, vibration, excessive heating and extra losses resulting in poor efficiency, which in turn necessitate de-rating of the machine, especially when connected to non-linear loads. As an important control unit of the brushless alternator, the excitation system and its dynamic performance have a direct impact on the alternator's stability and reliability.
The thesis explores the design and implementation of an excitation system utilizing the third harmonic flux in the air gap of brushless alternators, using an additional auxiliary winding, wound for 1/3rd pole pitch, embedded into the stator slots and electrically isolated from the main winding. In the third harmonic excitation system, the combined effect of two auxiliary windings, one with 2/3rd pitch and another third harmonic winding with 1/3rd pitch, is used to ensure good voltage regulation without an electronic automatic voltage regulator (AVR) and also reduces the total harmonic content in the output voltage, cost-effectively. The design of the third harmonic winding by analytic methods demands accurate calculation of the third harmonic flux density in the air gap of the machine. However, precise estimation of the amplitude of the third harmonic flux in the air gap of a machine by conventional design procedures is difficult due to the complex geometry of the machine and the non-linear characteristics of the magnetic materials. As such, prediction of the field parameters by conventional design methods is unreliable, and hence virtual prototyping of the machine is done to enable accurate design of the third harmonic excitation system. In the design and development cycle of electrical machines, it is recognized that the use of analytical and experimental methods followed by expensive and inflexible prototyping is time consuming and no longer cost effective. Due to advancements in computational capabilities over recent years, finite element method (FEM) based virtual prototyping has become an attractive alternative to well established semi-analytical and empirical design methods, as well as to the still popular trial-and-error approach followed by costly and time consuming prototyping. Hence, by virtually prototyping the alternator using FEM, the important performance characteristics of the machine are predicted.
Design of the third harmonic excitation system is done with the help of results obtained from the virtual prototype of the machine. The third harmonic excitation (THE) system is implemented in a 45 kVA experimental machine, and experiments are conducted to validate the simulation results. Simulation and experimental results show that by utilizing the third harmonic flux in the air gap of the machine for excitation purposes during loaded conditions, the triplen harmonic content in the output phase voltage is significantly reduced. The prototype machine with the third harmonic excitation system designed and developed based on FEM analysis proved to be economical due to its simplicity and has the added advantage of reduced harmonics in the output phase voltage.
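The THD figure this abstract centres on is, by the usual definition, the RMS of the harmonic content divided by the fundamental. A minimal sketch (the voltage amplitudes below are invented for illustration, not measurements from the 45 kVA machine):

```python
import math

def thd(harmonic_rms):
    """Total harmonic distortion: sqrt(V2^2 + V3^2 + ...) / V1,
    where harmonic_rms = [V1, V2, V3, ...] and V1 is the fundamental."""
    fundamental = harmonic_rms[0]
    distortion = math.sqrt(sum(v * v for v in harmonic_rms[1:]))
    return distortion / fundamental

# Suppressing the triplen (3rd, 9th, ...) harmonics lowers the THD:
before = thd([230.0, 0.0, 12.0, 0.0, 5.0])   # 3rd and 5th harmonics present
after  = thd([230.0, 0.0,  2.0, 0.0, 5.0])   # 3rd harmonic largely removed
assert after < before
```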
Abstract:
We compare a broad range of optimal product line design methods. The comparisons take advantage of recent advances that make it possible to identify the optimal solution to problems that are too large for complete enumeration. Several of the methods perform surprisingly well, including Simulated Annealing, Product-Swapping and Genetic Algorithms. The Product-Swapping heuristic is remarkable for its simplicity. The performance of this heuristic suggests that the optimal product line design problem may be far easier to solve in practice than indicated by complexity theory.
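One of the heuristics compared above, simulated annealing, can be sketched on a toy product-line problem in which the "share" of a three-product line is simply the number of distinct attribute levels it covers. The scoring rule, neighbourhood move and parameters here are invented for illustration, not the paper's experimental setup:

```python
import math
import random

def simulated_annealing(score, neighbor, init, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated annealing for maximisation problems."""
    current = best = init
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = score(candidate) - score(current)
        # Always accept improvements; accept some downhill moves early on.
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = candidate
            if score(current) > score(best):
                best = current
        t *= cooling
    return best

# Toy product line: 3 products, each at one of 5 attribute levels; the
# "market share" proxy is how many distinct levels the line covers.
random.seed(0)
line_score = lambda line: len(set(line))

def swap_one(line):
    line = list(line)
    line[random.randrange(len(line))] = random.randrange(5)
    return tuple(line)

best = simulated_annealing(line_score, swap_one, (0, 0, 0))
```

Product-Swapping, by contrast, would deterministically try replacing one product at a time and keep any swap that improves `line_score`, which is why the paper highlights its simplicity.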
Abstract:
Purpose: Acquiring details of kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics being frequently studied, attention is needed to estimate parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in posttranslational modification of proteins in cataract), and the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even or multiple increases. At least 60% must be below the KM (or plural if more than one dissociation constant) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and costs of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of design such as substrate range, number of measurements and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
Abstract:
In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted to the more complex kinetics being frequently studied, attention is needed to estimate parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains quantifiable in terms of information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of KM and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the parameters estimated. (C) 2003 Elsevier Science B.V. All rights reserved.
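The design rule reported in these abstracts, placing the bulk of the substrate points below KM, can be sketched as follows. The Michaelis-Menten rate law is standard; the even within-region spacing is an illustrative assumption, not the paper's algorithm:

```python
def mm_rate(vmax, km, s):
    """Michaelis-Menten initial rate: v = Vmax * S / (KM + S)."""
    return vmax * s / (km + s)

def design_points(km, s_max, n_points, frac_below=0.6):
    """Substrate concentrations with frac_below of the points under KM,
    spaced evenly within each region (an illustrative placement rule)."""
    n_low = round(n_points * frac_below)
    n_high = n_points - n_low
    low = [km * (i + 1) / (n_low + 1) for i in range(n_low)]
    high = [km + (s_max - km) * (i + 1) / n_high for i in range(n_high)]
    return low + high

# 10 design points for an assumed KM of 2.0 and a top substrate level of 10.0:
points = design_points(km=2.0, s_max=10.0, n_points=10)
rates = [mm_rate(10.0, 2.0, s) for s in points]
```

Concentrating points below KM samples the curved, parameter-sensitive part of the rate curve, which is why the papers find it roughly halves the parameter variance compared with an even spread.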
Abstract:
Aircraft systems are highly nonlinear and time varying. High-performance aircraft at high angles of incidence experience undesired coupling of the lateral and longitudinal variables, resulting in departure from normal controlled flight. The aim of this work is to construct a robust closed-loop control that optimally extends the stable and decoupled flight envelope. For the study of these systems nonlinear analysis methods are needed. Previously, bifurcation techniques have been used mainly to analyze open-loop nonlinear aircraft models and investigate control effects on dynamic behavior. In this work linear feedback control designs calculated by eigenstructure assignment methods are investigated for a simple aircraft model at a fixed flight condition. Bifurcation analysis in conjunction with linear control design methods is shown to aid control law design for the nonlinear system.
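Eigenstructure (pole) assignment by linear state feedback, the design tool used above, can be illustrated on a minimal two-state companion-form system. This is a textbook sketch, not the paper's aircraft model:

```python
def place_poles_2state(a0, a1, p1, p2):
    """State feedback u = -k1*x1 - k2*x2 for the companion-form system
        x1' = x2
        x2' = a0*x1 + a1*x2 + u.
    The closed loop has characteristic polynomial
        s^2 - (a1 - k2)*s - (a0 - k1),
    matched to the desired (s - p1)(s - p2) = s^2 - (p1 + p2)*s + p1*p2.
    """
    k2 = a1 - (p1 + p2)
    k1 = a0 + p1 * p2
    return k1, k2

# Move an unstable open loop (a0 = 1, a1 = 0) to poles at s = -1 and s = -2:
k1, k2 = place_poles_2state(1.0, 0.0, -1.0, -2.0)
```

Bifurcation analysis then enters by tracking how the closed-loop equilibria of the full nonlinear model move as flight-condition parameters vary, checking where a fixed linear gain like this one ceases to stabilise.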
H-infinity control design for time-delay linear systems: a rational transfer function based approach
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This paper presents a method to design concrete membrane elements with an orthogonal mesh of reinforcement subject to compressive stress. Design methods, in general, define how to quantify the reinforcement necessary to resist the tensile stresses and verify that the compression in the concrete is within the strength limit. If the compression in the membrane is excessive, it is possible to use reinforcement subject to compression. However, there is little information in the literature about how to design reinforcement for these cases. To that end, this paper presents a procedure based on Baumann's criteria [1]. The strength limits used herein are those recommended by the CEB [3]; however, a model is proposed in which this limit varies according to the tensile strain occurring perpendicular to the compression. This resistance model is based on concepts proposed by Vecchio and Collins [2].
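For the common case where both reinforcement directions end up in tension, the Baumann-type equilibrium equations reduce to a simple closed form. This sketch covers only that case (the compression-reinforcement case the paper actually addresses requires the full procedure), and the force values are illustrative:

```python
def membrane_reinforcement(nx, ny, nxy):
    """Design forces per unit width for an orthogonal mesh under membrane
    forces nx, ny, nxy (tension positive), when both directions require
    tensile reinforcement:
        F_x = nx + |nxy|,  F_y = ny + |nxy|,
    with the concrete diagonal strut carrying F_c = -2*|nxy|.
    """
    fx = nx + abs(nxy)
    fy = ny + abs(nxy)
    if fx < 0 or fy < 0:
        raise ValueError("compression governs: the full Baumann procedure applies")
    return fx, fy, -2.0 * abs(nxy)

# Illustrative membrane forces in kN/m:
fx, fy, fc = membrane_reinforcement(100.0, 50.0, 30.0)
```

The strut force `fc` is what must be checked against the CEB strength limit that the paper proposes to make strain-dependent.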