899 results for Design methods
Abstract:
Optics and LEDs, design methods, design examples, conclusions
Abstract:
Noise is a pollutant of growing importance, as a type of contamination characteristic of modern societies. Its origin and its effects on the population have been the subject of numerous lines of research around the world, and one of the most widely used tools is the annoyance-perception survey, carried out in many countries with a variety of methodological designs. However, the abundant information collected over many years and across countries with these tools does not yield interrelatable data for studying different conditions of exposure to different types of noise sources, because the surveys follow different data-collection instrument designs and, moreover, use different measurement scales. One effort to generate data and studies that can be related, and so contribute to greater knowledge of noise and its effects on public health, is the initiative of ICBEN (International Commission on Biological Effects of Noise): since 1997 a group of researchers has set out to produce studies under a methodology that allows different studies to be compared, so that their conclusions can be used by others, generating new knowledge and validating the results. One present-day problem is that the questionnaires used in different countries, and even within a single country, not only include questions of diverse forms and styles, but also employ, for the respondents' answers, a variety of methods and scales that cannot be extrapolated to or compared with other studies.
Since the scales that some studies use to gauge the population's degree of annoyance are verbal, current research and international recommendations suggest that each country develop its own scales rather than merely translating those used in other languages. This is because each word used in such a scale carries a metrological concept: people assign it a specific intensity value on the continuum of degrees of annoyance, so the words must be obtained for each population under study. Following the international recommendations, this research has obtained a verbal scale for use in surveys of subjective perception of noise annoyance in Chile, for a population aged 15 to 65. A test for obtaining the scale was developed following the international methodology, which has also made it possible to discuss and analyse its use in future research. Likewise, its equivalence with other scales that could be obtained with this same methodology has been analysed, as well as with the scale that the international ISO recommendation proposes for Spanish. The results confirm the need for each Spanish-speaking country to develop its own scales, which could explain, among other things, why the international standard still lacks official status for want of broader consensus in the scientific community; the data provided by this research will allow investigators to explore these and other aspects in greater depth. ABSTRACT Noise is considered a pollutant of growing importance because it has become recognized as a major annoyance in modern societies. The study of its origin and effects on populations has been the objective of numerous lines of investigation all over the world.
The annoyance questionnaire is a tool frequently used in numerous countries, with a variety of designs. Nevertheless, the large quantity of information collected over many years in different countries with these tools has not allowed the data to be interrelated, because the studies rely on differently designed data-collection instruments and, moreover, on different measurement scales for characterizing exposure to different types and sources of noise pollution. One effort to generate data and studies that can be combined to improve understanding of noise and its effects on public health is the initiative developed by ICBEN (International Commission on Biological Effects of Noise), proposed in 1997, which aims to produce studies under methodologies that allow comparison of different studies and coordination of their conclusions, so that other investigators can build on them, generate additional information and validate results. One currently existing problem is that the questionnaires used by different countries, and even within the same country, not only pose questions based on different models and styles, but also interpret the answers given by the interviewed subjects using different methods and measurement scales, which prevents direct comparison with other studies. Since the scales used to measure the reported levels of annoyance are verbal, current lines of investigation and international recommendations call for each country to develop its own scale in its own language rather than merely translating scales from other languages.
This is because each word used in a measurement scale carries a metrological concept: interview subjects assign it a specific intensity on the continuum of levels of annoyance, and these values must be obtained for each population under study. This investigation, following the international recommendations, has obtained a verbal scale to be used in questionnaires on subjective perception of noise annoyance in Chile, for a population aged between 15 and 65 years. A test to obtain this scale was developed following the international methodology guidelines, which has also permitted discussion and analysis of its use by other investigators. By the same token, it has been possible to analyze the equivalence between this scale and other scales that could be obtained with the same methodology, including the scale recommended by ISO for the Spanish language. The results demonstrate the need for each Spanish-speaking country to develop its own measurement scales, which could explain why the international standard still lacks official status for want of consensus in the scientific community; the data contributed by this investigation will allow investigators to examine these and other aspects in greater depth.
Abstract:
This thesis addresses the modelling, analysis and optimization of plane steel building frames with respect to the ultimate and serviceability limit states. The general objective is to present an ordered sequential technique of discrete optimization for obtaining the minimum cost of plane steel building frames, taking into account the EC-3 specifications and incorporating semi-rigid joints and non-prismatic elements into the design process, and to assess their degree of influence on the final design. The aim is to draw practical conclusions that are useful and simple to apply in steel-structure design. The quantity of technical and scientific publications on the structural response of steel frames is immense; an intense effort has therefore been made to compile the current state of knowledge and the current lines and needs of research. Information has been gathered on modern calculation and design methods, on the factors that influence the structural response, and on modelling and optimization techniques, in the light of the guidance that some current codes offer on the subject. A modelling procedure based on the finite element method has been developed in the MatLab environment; key aspects have been included such as second-order behaviour, verification against instability, and the search for the optimum cost of the structure with respect to the limit states, taking into account the EC-3 specifications. The flexibility of the joints has also been modelled and its influence on the response of the structure and on its final weight and cost has been analysed. Several application examples have been run, and the validity of the model has been checked against results for structures already analysed in well-known technical references.
Conclusions have been drawn on the modelling and analysis process and on the effect of joint flexibility on the structural response, with the purpose of extracting conclusions useful at the design stage. One of the main contributions of this work, in its optimization approach, is the incorporation of a formulation for non-prismatic elements with semi-rigid joints at their ends. An elastic stiffness matrix has been derived for these elements, and its validity for nonlinear analysis has been verified by comparing the results with those obtained using another analytically derived matrix available in the literature, and with the commercial software SAP2000. Another contribution of this thesis is the development of a cost-optimization method for plane steel building frames that accounts for imperfections, non-prismatic elements and the characterization of semi-rigid joints, assessing the influence of their flexibility on the structural response. Parametric studies have been carried out to assess the sensitivity and stability of the solutions obtained, as well as the range of validity of the conclusions. This thesis deals with the problems of modelling, analysis and optimization of plane steel frames with regard to ultimate and serviceability limit states. The objective of this work is to present an organized sequential technique of discrete optimization for achieving the minimum cost of plane steel frames, taking into consideration the EC-3 specifications as well as including effects of the semi-rigid joints and non-prismatic elements in the design process. Likewise, an estimate of their influence on the final design is an aim of this work. The final objective is to draw practical conclusions which can be handy and easily applicable for a steel-structure project.
An enormous amount of technical and scientific publications regarding steel frames is currently available, thus making the achievement of a comprehensive and updated knowledge a considerably hard task. In this work, a large variety of information has been gathered and classified, especially that related to current research lines and needs. Thus, the literature collected encompasses references related to state-of-the-art design methods, factors influencing the structural response, modelling and optimization techniques, as well as calculation and updated guidelines of some steel design codes on the subject. In this work a modelling procedure based on the finite element method, implemented within the MatLab programming environment, has been developed. Several key aspects have been included, such as second-order behaviour, the safety assessment against structural instability and the search for an optimal cost considering the limit states according to EC-3 specifications. The flexibility of joints has been taken into account in the procedure presented here; its effects on the structural response, on the optimum weight and on the final cost have also been analysed. In order to confirm the validity and adequacy of this procedure, some application examples have been carried out. The results obtained were compared with those available from other authors. Several conclusions about the procedure, which comprises modelling, analysis and design stages, as well as about the effect of the flexibility of connections on the structural response, have been drawn. The purpose is to point out some guidelines for the early stages of a project. One of the contributions of this thesis is an approach for optimizing plane steel frames in which both non-prismatic beam-column elements and semi-rigid connections are considered. Thus, an elastic stiffness matrix has been derived.
Its validity has been tested by comparing its accuracy with other analytically obtained matrices available in the literature, and with results obtained with the commercial software SAP2000. Another achievement of this work is the development of a method for cost optimization of plane steel building frames in which several relevant aspects have been taken into consideration. These encompass geometric imperfections, non-prismatic beam elements and the numerical characterization of semi-rigid connections, evaluating the effect of their flexibility on the structural response. Hence, some parametric analyses have been performed in order to assess the sensitivity and stability of the outcomes, as well as their range of applicability.
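The semi-rigid stiffness formulation described above can be illustrated with a small numerical sketch. This is not the thesis's actual MatLab implementation; it is a minimal NumPy example (function names are hypothetical) that condenses rotational end springs `k1`, `k2` into the standard prismatic Euler-Bernoulli bending stiffness by static condensation, recovering the rigid-joint matrix as the springs stiffen and a null matrix (pure rigid-body motion) as they vanish.

```python
import numpy as np

def beam_stiffness(EI, L):
    """Standard 4x4 Euler-Bernoulli bending stiffness, DOFs [v1, th1, v2, th2]."""
    return EI / L**3 * np.array([
        [ 12,     6*L,  -12,     6*L],
        [6*L,  4*L**2, -6*L,  2*L**2],
        [-12,    -6*L,   12,    -6*L],
        [6*L,  2*L**2, -6*L,  4*L**2],
    ])

def semi_rigid_beam_stiffness(EI, L, k1, k2):
    """Condense rotational end springs k1, k2 into the beam stiffness.

    Internal DOF order: [v1, thc1, v2, thc2, thb1, thb2], where thc are the
    connection (node) rotations and thb the beam-end rotations; each spring
    couples thc_i with thb_i. Static condensation of thb yields a 4x4 matrix
    in the nodal DOFs [v1, thc1, v2, thc2].
    """
    kb = beam_stiffness(EI, L)
    K = np.zeros((6, 6))
    idx = [0, 4, 2, 5]                       # beam acts on [v1, thb1, v2, thb2]
    for a, ia in enumerate(idx):
        for b, ib in enumerate(idx):
            K[ia, ib] += kb[a, b]
    for kc, (ic, ib) in zip((k1, k2), ((1, 4), (3, 5))):
        K[ic, ic] += kc                      # spring between node and beam end
        K[ib, ib] += kc
        K[ic, ib] -= kc
        K[ib, ic] -= kc
    ee, ii = [0, 1, 2, 3], [4, 5]            # retained / condensed DOFs
    Kee, Kei, Kii = K[np.ix_(ee, ee)], K[np.ix_(ee, ii)], K[np.ix_(ii, ii)]
    return Kee - Kei @ np.linalg.solve(Kii, Kei.T)
```

With very stiff springs the condensed matrix reproduces the rigid-joint beam stiffness; with zero spring stiffness the member transmits no bending and the condensed matrix degenerates to zero, which is a convenient sanity check for any semi-rigid formulation.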
Abstract:
The history of Software Engineering has been marked by many famous project failures documented in papers, articles and books. This pattern of failure has prompted the creation of dozens of software analysis, requirements definition and design methods, programming languages, software development environments and software development processes, all promoted as solving "the software problem." What we hear less about are software projects that were successful. This article reports the findings of an extensive analysis of successful software projects reported in the literature. It discusses the different interpretations of success and extracts the characteristics that successful projects have in common. These characteristics provide software project managers with an agenda of topics to address that will help ensure, though not guarantee, that their software project will be successful.
Abstract:
The Marshall and Superpave mix-design procedures define the design binder content on the basis of volumetric parameters. As a result, design systems with different types of compaction may lead to different design binder contents, which in turn determine the service life of asphalt pavements. The main objective of this work is to evaluate the mechanical behaviour of asphalt mixtures moulded by different laboratory compaction methods and to analyse the relationship with results from samples obtained from mixtures compacted by pneumatic-tyre rolling on the French slab compactor. The experimental phase consisted of mix design by the Marshall and Superpave methods (the latter with two mould sizes), compaction in the French gyratory shear press (PCG), and the moulding of slabs on the slab compactor. The effects of compaction type, mould size and number of gyrations of the Superpave Gyratory Compactor (SGC) on the design binder content, volumetric parameters, mechanical behaviour, fatigue performance and rutting resistance were evaluated. Additionally, the efficiency of the Bailey method of aggregate gradation with respect to resistance to permanent deformation was evaluated as a function of aggregate type. It was found that the Bailey method, by itself, does not guarantee resistance to permanent deformation, which depends on the aggregate type, including its shape parameters. The main product of the research, with practical effects on asphalt mix design, is the recommendation of the Superpave method with the 100 mm mould (for nominal maximum aggregate size <= 12.5 mm) for medium-to-high traffic volumes, in preference to the Superpave method with the 150 mm mould, given that the former produces densification more similar to that of samples prepared by roller compaction (similar to what occurs in the field), which also results in mechanical behaviour closer to field conditions.
The use of 150 mm diameter moulds in the SGC can be made viable provided that a number of gyrations lower than that proposed for design by the Asphalt Institute (2001) is adopted. Finally, it is essential that the tests and calculations used to obtain the volumetric parameters and to select the design binder content follow the standards of ASTM, the Asphalt Institute (2001) and ABNT.
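The volumetric parameters that drive both the Marshall and Superpave design binder content follow the standard Asphalt Institute relations. A minimal sketch (hypothetical function name; `Gmb` and `Gmm` are the bulk and theoretical maximum specific gravities, `Gsb` the aggregate bulk specific gravity, and `Pb` the binder content in percent by total mass of mix):

```python
def volumetrics(Gmb, Gmm, Gsb, Pb):
    """Air voids (Va), voids in mineral aggregate (VMA) and voids filled
    with asphalt (VFA), all in percent, from specific gravities."""
    Ps = 100.0 - Pb                      # aggregate content, % by mass of mix
    Va = 100.0 * (Gmm - Gmb) / Gmm      # air voids
    VMA = 100.0 - Gmb * Ps / Gsb        # voids in mineral aggregate
    VFA = 100.0 * (VMA - Va) / VMA      # voids filled with asphalt
    return Va, VMA, VFA
```

For example, a mix with Gmb = 2.35, Gmm = 2.45, Gsb = 2.65 and 5.0% binder gives roughly 4.1% air voids, 15.8% VMA and 74% VFA; the design binder content is the one bringing Va to the target (typically 4%).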
Abstract:
This multidisciplinary study concerns the optimal design of processes with a view to both maximizing profit and minimizing environmental impacts. This can be achieved by a combination of traditional chemical process design methods, measurements of environmental impacts and advanced mathematical optimization techniques. More to the point, this paper presents a hybrid simulation-multiobjective optimization approach that simultaneously optimizes the production cost and minimizes the associated environmental impacts of isobutane alkylation. This approach has also made it possible to obtain the flowsheet configurations and process variables that are needed to manufacture isooctane in a way that satisfies the above-stated double aim. The problem is formulated as a Generalized Disjunctive Programming problem and solved using state-of-the-art logic-based algorithms. It is shown, starting from existing alternatives for the process, that it is possible to systematically generate a superstructure that includes alternatives not previously considered. The optimal solution, in the form of a Pareto curve, includes different structural alternatives from which the most suitable design can be selected. To evaluate the environmental impact, Life Cycle Assessment based on two different indicators is employed: Eco-indicator 99 and Global Warming Potential.
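The Pareto curve mentioned above is the set of non-dominated (cost, impact) alternatives: no other alternative is better on both objectives at once. This is not the paper's GDP formulation, only an illustrative filter for two minimized objectives:

```python
def pareto_front(points):
    """Return the non-dominated points for two minimized objectives,
    given as (cost, impact) pairs, sorted by increasing cost."""
    pts = sorted(points)                 # sort by cost, then impact
    front, best_impact = [], float("inf")
    for cost, impact in pts:
        # a point joins the front only if it improves the best impact so far
        if impact < best_impact:
            front.append((cost, impact))
            best_impact = impact
    return front
```

Each point on the resulting front represents a distinct structural alternative; the designer then selects among them according to the relative weight given to cost versus environmental impact.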
Abstract:
PURPOSE: To evaluate and compare the visual, refractive, contrast sensitivity, and aberrometric outcomes with a diffractive bifocal and trifocal intraocular lens (IOL) of the same material and haptic design. METHODS: Sixty eyes of 30 patients undergoing bilateral cataract surgery were enrolled and randomly assigned to one of two groups: the bifocal group, including 30 eyes implanted with the bifocal diffractive IOL AT LISA 801 (Carl Zeiss Meditec, Jena, Germany), and the trifocal group, including eyes implanted with the trifocal diffractive IOL AT LISA tri 839 MP (Carl Zeiss Meditec). Analyses of visual and refractive outcomes, contrast sensitivity, ocular aberrations (OPD-Scan III; Nidek, Inc., Gamagori, Japan), and the defocus curve were performed during a 3-month follow-up period. RESULTS: No statistically significant differences between groups were found in 3-month postoperative uncorrected and corrected distance visual acuity (P > .21). However, uncorrected, corrected, and distance-corrected near and intermediate visual acuities were significantly better in the trifocal group (P < .01). No significant differences between groups were found in postoperative spherical equivalent (P = .22). In the binocular defocus curve, visual acuity was significantly better for defocus of -0.50 to -1.50 diopters in the trifocal group (P < .04) and -3.50 to -4.00 diopters in the bifocal group (P < .03). No statistically significant differences were found between groups in most of the postoperative corneal, internal, and ocular aberrations (P > .31), or in contrast sensitivity for most frequencies analyzed (P > .15). CONCLUSIONS: Trifocal diffractive IOLs provide significantly better intermediate vision than bifocal IOLs, with equivalent postoperative levels of visual and ocular optical quality.
Abstract:
Wood is a natural and traditional building material, as popular today as ever, and offers several advantages. Physically, wood is strong and stiff, but compared with materials like steel it is light and flexible. Wood absorbs sound very effectively and is a relatively good heat insulator. However, dry wood burns quite easily and releases a great deal of heat energy. Its main disadvantage is its combustibility when exposed to fire: it ignites at roughly 200-400°C. After fire exposure, it is necessary to determine whether charred wooden structures are safe for future use. Design methods require computer modelling to predict the fire exposure and the capacity of structures to resist those actions. Large- or small-scale experimental tests are also necessary to calibrate and verify the numerical models. The thermal model is essential for wood structures exposed to fire, because it predicts the charring rate as a function of fire exposure. For most structural wood elements the charring rate can be obtained with simple calculations, but the situation is more complicated when the fire exposure is non-standard or when wood elements are protected with other materials. In this work, the authors present different case studies using numerical models that will help professionals analyse wood elements, and the type of information needed to decide whether charred structures are adequate for continued use. Different thermal models representing wooden cellular slabs, used in building construction for ceiling or flooring compartments, are analysed and subjected to different fire scenarios (with standard fire curve exposure). The same numerical models, with insulation material inside the wooden cellular slabs, are also tested to compare and determine the fire resistance time and the charring rate.
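The "simple calculations" referred to above treat charring under standard fire exposure as linear in time, as in the Eurocode 5 (EN 1995-1-2) notional charring rate and reduced cross-section method. A hedged sketch, assuming the tabulated solid-softwood values (notional rate beta_n of about 0.8 mm/min and a zero-strength layer d0 of 7 mm); these are not values taken from the paper:

```python
def char_depth_mm(t_min, beta_n=0.8):
    """Notional charring depth (mm) under standard fire exposure:
    d_char,n = beta_n * t, one-dimensional, constant rate."""
    return beta_n * t_min

def effective_width_mm(b_mm, t_min, sides=2, beta_n=0.8, d0=7.0):
    """Reduced cross-section width: charring depth plus the zero-strength
    layer d0 removed from each fire-exposed side."""
    return b_mm - sides * (char_depth_mm(t_min, beta_n) + d0)
```

For example, a 100 mm wide member exposed on two sides for 30 min would retain an effective width of about 38 mm; non-standard fires and protected elements require the full thermal model instead of this linear rule.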
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Purpose. The ability to sense the position of limb segments is a highly specialised proprioceptive function important for control of movement. Abnormal knee proprioception has been found in association with several musculoskeletal pathologies, but whether nociceptive stimulation can produce these proprioceptive changes is unclear. This study evaluated the effect of experimentally induced knee pain on knee joint position sense (JPS) in healthy individuals. Study design. Repeated-measures, within-subject design. Methods. Knee JPS was tested in 16 individuals with no history of knee pathology under three experimental conditions: baseline control, a distraction task, and knee pain induced by injection of hypertonic saline into the infrapatellar fat pad. Knee JPS was measured using active ipsilateral limb-matching responses at 20° and 60° flexion whilst non-weightbearing (NWB) and in 20° flexion single-leg stance. During the tasks, the subjective perception of distraction and severity of pain were measured using 11-point numerical rating scales. Results. Knee JPS was not altered by acute knee pain in any of the positions tested. The distraction task resulted in poorer concentration, greater JPS absolute errors at 20° NWB, and greater variability in errors during the WB tests. There were no significant correlations between levels of pain and changes in JPS errors. Changes in JPS with pain and distraction were inversely related to baseline knee JPS variable error in all test positions (r = -0.56 to -0.91) but less related to baseline absolute error. Conclusion. Knee JPS is reduced by an attention-demanding task but not by experimentally induced pain. (C) 2004 Orthopaedic Research Society. Published by Elsevier Ltd. All rights reserved.
Theory-of-mind development in oral deaf children with cochlear implants or conventional hearing aids
Abstract:
Background: In the context of the established finding that theory-of-mind (ToM) growth is seriously delayed in late-signing deaf children, and some evidence of equivalent delays in those learning speech with conventional hearing aids, this study's novel contribution was to explore ToM development in deaf children with cochlear implants. Implants can substantially boost auditory acuity and rates of language growth. Despite the implant, there are often problems socialising with hearing peers and some language difficulties, lending special theoretical interest to the present comparative design. Methods: A total of 52 children aged 4 to 12 years took a battery of false belief tests of ToM. There were 26 oral deaf children, half with implants and half with hearing aids, evenly divided between oral-only versus sign-plus-oral schools. Comparison groups of age-matched high-functioning children with autism and younger hearing children were also included. Results: No significant ToM differences emerged between deaf children with implants and those with hearing aids, nor between those in oral-only versus sign-plus-oral schools. Nor did the deaf children perform any better on the ToM tasks than their age peers with autism. Hearing preschoolers scored significantly higher than all other groups. For the deaf and the autistic children, as well as the preschoolers, rate of language development and verbal maturity significantly predicted variability in ToM, over and above chronological age. Conclusions: The finding that deaf children with cochlear implants are as delayed in ToM development as children with autism and their deaf peers with hearing aids or late sign language highlights the likely significance of peer interaction and early fluent communication with peers and family, whether in sign or in speech, in order to optimally facilitate the growth of social cognition and language.
Abstract:
Primary objective: To test whether people with cognitive-linguistic impairments following traumatic brain injury could learn to use the Internet using specialized training materials. Research design: Pre-post test design. Methods and procedures: Seven participants were each matched with a volunteer tutor. Basic Internet skills were taught over six lessons using a tutor's manual and a student manual. Instructions used simple text and graphics based on Microsoft Internet Explorer 5.5. Students underwent Internet skills assessments and interviews pre- and post-training. Tutors completed a post-training questionnaire. Main outcomes and results: Six of seven participants reached moderate-to-high degrees of independence. Literacy impairment was an expected training barrier; however, cognitive impairments affecting concentration, memory and motivation were more significant. Conclusions: Findings suggest that people with cognitive-linguistic impairments can learn Internet skills using specialized training materials. Participants and their carers also reported positive outcomes beyond the acquisition of Internet skills.
Abstract:
Primary objectives: (1) To investigate the Nonword Repetition test (NWR) as an index of sub-vocal rehearsal deficits after mild traumatic brain injury (mTBI); (2) to assess the reliability, validity and sensitivity of the NWR; and (3) to compare the NWR to more sensitive tests of verbal memory. Research design: An independent groups design. Methods and procedures: Study 1 administered the NWR to 46 mTBI and 61 uninjured controls with the Rapid Screen of Concussion (RSC). Study 2 compared mTBI, orthopaedic and uninjured participants on the NWR and the Hopkins Verbal Learning Test (HVLT-R). Main outcomes and results: The NWR did not improve the diagnostic accuracy of the RSC. However, it is reliable and indexes sub-vocal rehearsal speed. These findings provide evidence that although the current form of the NWR lacks sensitivity to the impact of mTBI, the development of a more sensitive test of sub-vocal rehearsal deficits following mTBI is warranted.
Abstract:
Background: Oral itraconazole (ITRA) is used for the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF) because of its antifungal activity against Aspergillus species. ITRA has an active hydroxy-metabolite (OH-ITRA) which has similar antifungal activity. ITRA is a highly lipophilic drug which is available in two different oral formulations, a capsule and an oral solution. It is reported that the oral solution has a 60% higher relative bioavailability. The influence of altered gastric physiology associated with CF on the pharmacokinetics (PK) of ITRA and its metabolite has not been previously evaluated. Objectives: 1) To estimate the population (pop) PK parameters for ITRA and its active metabolite OH-ITRA including relative bioavailability of the parent after administration of the parent by both capsule and solution and 2) to assess the performance of the optimal design. Methods: The study was a cross-over design in which 30 patients received the capsule on the first occasion and 3 days later the solution formulation. The design was constrained to have a maximum of 4 blood samples per occasion for estimation of the popPK of both ITRA and OH-ITRA. The sampling times for the population model were optimized previously using POPT v.2.0.[1] POPT is a series of applications that run under MATLAB and provide an evaluation of the information matrix for a nonlinear mixed effects model given a particular design. In addition it can be used to optimize the design based on evaluation of the determinant of the information matrix. The model details for the design were based on prior information obtained from the literature, which suggested that ITRA may have either linear or non-linear elimination. The optimal sampling times were evaluated to provide information for both competing models for the parent and metabolite and for both capsule and solution simultaneously. 
Blood samples were assayed by validated HPLC.[2] PopPK modelling was performed using FOCE with interaction under NONMEM, version 5 (level 1.1; GloboMax LLC, Hanover, MD, USA). The PK of ITRA and OH-ITRA was modelled simultaneously using ADVAN 5. Subsequently, three methods were assessed for modelling concentrations below the LOD (limit of detection). These methods (corresponding to methods 5, 6 and 4 from Beal[3], respectively) were (a) assigning all values below the LOD to half the LOD, (b) assigning the closest missing value below the LOD to half the LOD and deleting all previous (if during absorption) or subsequent (if during elimination) missing samples, and (c) estimating the contribution of the expectation of each missing concentration to the likelihood. The LOD was 0.04 mg/L. The final model evaluation was performed via bootstrap with re-sampling and a visual predictive check. The optimal design and the sampling windows of the study were evaluated for execution errors and for agreement between the observed and predicted standard errors. Dosing regimens were simulated for the capsules and the oral solution to assess their ability to achieve the ITRA target trough concentration (Cmin,ss of 0.5-2 mg/L) or a combined Cmin,ss for ITRA and OH-ITRA above 1.5 mg/L. Results and Discussion: A total of 241 blood samples were collected and analysed; 94% of them were taken within the defined optimal sampling windows, of which 31% were taken within 5 min of the exact optimal times. Forty-six per cent of the ITRA values and 28% of the OH-ITRA values were below the LOD. The entire profile after administration of the capsule was below the LOD for five patients, so the data from that occasion were omitted from estimation. A 2-compartment model with 1st-order absorption and elimination best described ITRA PK, with 1st-order metabolism of the parent to OH-ITRA.
For ITRA the clearance (ClItra/F) was 31.5 L/h; apparent volumes of the central and peripheral compartments were 56.7 L and 2090 L, respectively. Absorption rate constants for the capsule (kacap) and solution (kasol) were 0.0315 h⁻¹ and 0.125 h⁻¹, respectively. Relative bioavailability of the capsule was 0.82. There was no evidence of nonlinearity in the popPK of ITRA. No screened covariate significantly improved the fit to the data. The parameter estimates from the final model were comparable between the different methods of accounting for missing data (M4, M5, M6).[3] The prospective application of an optimal design was found to be successful. Thanks to the sampling windows, most of the samples could be collected within the daily hospital routine, yet still at times that were near-optimal for estimating the popPK parameters. The final model was one of the potential competing models considered in the original design. The asymptotic standard errors provided by NONMEM for the final model and the empirical values from the bootstrap were similar in magnitude to those predicted from the Fisher information matrix associated with the D-optimal design. Simulations from the final model showed that the current dosing regimen of 200 mg twice daily (bd) would achieve the target Cmin,ss (0.5-2 mg/L) in only 35% of patients when administered as the solution and 31% when administered as capsules. The optimal dosing schedule was 500 mg bd for both formulations. The target success for this regimen was 87% for the solution, with an NNT of 4 compared to capsules: for every 4 patients treated with the solution, one additional patient will achieve target success compared to the capsule, at an additional cost of AUD $220 per day. The therapeutic target, however, is still doubtful, and the potential risks of these dosing schedules need to be assessed on an individual basis.
Conclusion: A model was developed that describes the popPK of ITRA and its main active metabolite OH-ITRA in adults with CF after administration of both the capsule and the solution. The relative bioavailability of ITRA from the capsule was 82% of that of the solution, but considerably more variable. For incorporating missing data, the simple Beal method 5 (half the LOD for all samples below the LOD) provided results comparable to the more complex but theoretically better Beal method 4 (integration method). The optimal sparse design performed well for estimation of the model parameters and provided a good fit to the data.
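Of the below-LOD methods compared, Beal's method 5 is simple enough to sketch directly. The helper below is hypothetical, not the study's NONMEM code; it applies the M5 rule (replace every observation below the limit of detection with LOD/2) to a concentration series, using the study's LOD of 0.04 mg/L as the default:

```python
LOD = 0.04  # mg/L, limit of detection reported in the study

def apply_beal_m5(concs, lod=LOD):
    """Beal method 5: substitute LOD/2 for every concentration below the LOD,
    leaving quantifiable observations unchanged."""
    return [c if c >= lod else lod / 2.0 for c in concs]
```

Methods 6 and 4 differ in how the remaining below-LOD points are handled (selective deletion versus integrating their likelihood contribution), which is why they must be implemented inside the estimation step rather than as a simple preprocessing pass like this one.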