917 results for design methods and aids


Relevance: 100.00%

Abstract:

Changes are made continuously to the source code of software systems to meet customer needs and to correct faults. Continuous change can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, typically in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without such defects. Yet only a few of these authors have conducted an empirical study of the impact of design defects on comprehension, and none has studied the impact of design defects on the effort developers spend correcting faults. In this thesis, we propose three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to evaluate the impact of the combination of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using: (1) the NASA workload index for their effort, (2) the time they spent completing their tasks, and (3) the percentage of correct answers. The results of the two experiments show that two occurrences of Blob or of spaghetti code are a significant obstacle to developer performance during comprehension and change tasks. These results justify previous research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design defect occurrences and recommend refactorings at each step of the development process to remove these defects whenever possible.

In the second contribution, we study the relation between design defects and faults, and in particular the impact of the presence of design defects on the effort required to correct faults. We measure the fault-correction effort using three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of the fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results show that the correction period is longer for faults involving classes with design defects. Moreover, correcting faults in classes with design defects changes more files, fields, and methods. We also observed that, after a fault is corrected, the number of design defect occurrences in the classes involved in the correction decreases. Understanding the impact of design defects on the effort developers spend correcting faults is important to help development teams better evaluate and predict the impact of their design decisions, and thus channel their efforts toward improving the quality of their systems. Development teams should monitor and remove design defects from their systems because they are likely to increase change effort.

The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool able to detect design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and deal with design defect occurrences as they encounter them during comprehension and change. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require extensive knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach for detecting design defects based on a machine learning technique, support vector machines, that takes practitioners' feedback into account. Through an empirical study on three systems and four design defects, we show that the precision and recall of SMURF are higher than those of DETEX and BDTEX when detecting design defect occurrences. We also show that SMURF can be applied in both intra-system and inter-system configurations. Finally, we show that the precision and recall of SMURF improve when practitioners' feedback is taken into account.
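The abstract does not spell out SMURF's features, kernel, or feedback protocol; the sketch below is only a rough illustration of the general idea it describes, a support-vector classifier over class-level metrics that is refit as a practitioner labels new candidate classes. The metric set, the toy data, and the feedback step are all hypothetical.

```python
# Hypothetical sketch of SVM-based design-defect detection in the spirit described
# above; the real SMURF feature set, kernel and feedback protocol are not given here.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy class-level metrics: [number of methods, attributes, coupling, lack of cohesion]
X = np.array([
    [62, 35, 20, 0.92],   # very large, low-cohesion class -> Blob-like
    [58, 28, 17, 0.88],
    [ 7,  3,  4, 0.21],   # small, cohesive class -> clean
    [ 9,  5,  6, 0.30],
    [12,  4,  5, 0.25],
    [70, 40, 25, 0.95],
])
y = np.array([1, 1, 0, 0, 0, 1])  # 1 = Blob occurrence, 0 = clean

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# A practitioner inspects a new class and confirms it is a Blob; refitting with that
# label mimics the iterative, feedback-driven detection the thesis argues for.
new_class = np.array([[55, 30, 18, 0.90]])
print("decision value before feedback:", clf.decision_function(new_class)[0])
X, y = np.vstack([X, new_class]), np.append(y, 1)
clf.fit(X, y)
print("decision value after feedback:", clf.decision_function(new_class)[0])
```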

Relevance: 100.00%

Abstract:

Salient pole brushless alternators coupled to IC engines are extensively used as stand-by power supply units for meeting industrial power demands. The design of such generators demands a high power-to-weight ratio, high efficiency and low cost per kVA output. Moreover, performance characteristics of such machines, like voltage regulation and short circuit ratio (SCR), are critical when these machines are put into parallel operation, and alternators for critical applications like defence and aerospace demand very low harmonic content in the output voltage. While designing such alternators, accurate prediction of machine characteristics, including total harmonic distortion (THD), is essential to minimize development cost and time. Total harmonic distortion in the output voltage of alternators should be as low as possible, especially when powering very sophisticated and critical applications. The output voltage waveform of a practical AC generator is a replica of the space distribution of the flux density in the air gap, and several factors such as the shape of the rotor pole face, core saturation, slotting and style of coil disposition make the realization of a sinusoidal air gap flux wave impossible. These flux harmonics introduce undesirable effects on the alternator performance, like high neutral current due to triplen harmonics, voltage distortion, noise, vibration, excessive heating and also extra losses resulting in poor efficiency, which in turn necessitate de-rating of the machine, especially when connected to non-linear loads. As an important control unit of the brushless alternator, the excitation system and its dynamic performance have a direct impact on the alternator's stability and reliability. The thesis explores the design and implementation of an excitation system utilizing the third harmonic flux in the air gap of brushless alternators, using an additional auxiliary winding, wound for 1/3rd pole pitch, embedded into the stator slots and electrically isolated from the main winding. In the third harmonic excitation system, the combined effect of two auxiliary windings, one with 2/3rd pitch and another third harmonic winding with 1/3rd pitch, is used to ensure good voltage regulation without an electronic automatic voltage regulator (AVR) and also to reduce the total harmonic content in the output voltage, cost effectively. The design of the third harmonic winding by analytic methods demands accurate calculation of the third harmonic flux density in the air gap of the machine. However, precise estimation of the amplitude of the third harmonic flux in the air gap of a machine by conventional design procedures is difficult due to the complex geometry of the machine and the non-linear characteristics of the magnetic materials. As such, prediction of the field parameters by conventional design methods is unreliable, and hence virtual prototyping of the machine is done to enable accurate design of the third harmonic excitation system. In the design and development cycle of electrical machines, it is recognized that the use of analytical and experimental methods followed by expensive and inflexible prototyping is time consuming and no longer cost effective. Due to advancements in computational capabilities over recent years, finite element method (FEM) based virtual prototyping has become an attractive alternative to well established semi-analytical and empirical design methods, as well as to the still popular trial and error approach followed by costly and time consuming prototyping. Hence, by virtually prototyping the alternator using FEM, the important performance characteristics of the machine are predicted. The design of the third harmonic excitation system is done with the help of results obtained from the virtual prototype of the machine. The third harmonic excitation (THE) system is implemented in a 45 kVA experimental machine, and experiments are conducted to validate the simulation results. Simulation and experimental results show that by utilizing the third harmonic flux in the air gap of the machine for excitation purposes during loaded conditions, the triplen harmonic content in the output phase voltage is significantly reduced. The prototype machine with the third harmonic excitation system designed and developed based on FEM analysis proved to be economical due to its simplicity, and has the added advantage of reduced harmonics in the output phase voltage.
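As a small numerical aside, the quantity the thesis aims to reduce, total harmonic distortion of the phase voltage, can be computed from the harmonic spectrum of a sampled waveform. The sketch below uses an invented waveform with third and fifth harmonics, not data from the 45 kVA machine.

```python
# Illustrative THD calculation on a synthetic phase voltage:
# THD = sqrt(sum of squared harmonic amplitudes, n >= 2) / fundamental amplitude.
import numpy as np

f1, fs, cycles = 50.0, 10_000.0, 10                  # fundamental (Hz), sampling rate, window
t = np.arange(0, cycles / f1, 1 / fs)
v = (230 * np.sqrt(2) * np.sin(2 * np.pi * f1 * t)   # fundamental
     + 12 * np.sin(2 * np.pi * 3 * f1 * t)           # triplen (3rd) harmonic
     + 5 * np.sin(2 * np.pi * 5 * f1 * t))           # 5th harmonic

spectrum = 2 * np.abs(np.fft.rfft(v)) / len(v)       # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(v), 1 / fs)

def amplitude(n):
    """Amplitude of the n-th harmonic, read off the nearest FFT bin."""
    return spectrum[np.argmin(np.abs(freqs - n * f1))]

fundamental = amplitude(1)
harmonics = np.array([amplitude(n) for n in range(2, 26)])
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD = {100 * thd:.2f} %")
```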

Relevance: 100.00%

Abstract:

This book argues for novel strategies to integrate engineering design procedures and structural analysis data into architectural design. Algorithmic procedures that have recently migrated into architectural practice are utilized to improve the interface between the two disciplines. Architectural design is predominantly conducted as a negotiation process between various factors, but it often lacks the rigor and data structures to link it to quantitative procedures. Numerical structural design, on the other hand, could act as a role model for handling data and robust optimization, but it often lacks the complexity of architectural design. The goal of this research is to bring together robust methods from structural design and complex dependency networks from architectural design processes. The book presents three case studies of tools and methods that are developed to exemplify, analyze and evaluate a collaborative workflow.

Relevance: 100.00%

Abstract:

Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance.

Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of the model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes of the glyoxalase pathway (of importance in the post-translational modification of proteins in cataract) and to the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract).

Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points across the range is critical; it is not simply a matter of even spacing or of spacing by constant multiples. At least 60% of the points must lie below the KM (or KMs, if there is more than one dissociation constant) and 40% above. This choice halves the variance obtained with a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments.

Conclusions: We have developed an optimal and iterative method for selecting features of the design, such as the substrate range, the number of measurements and the choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
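The 60%-below-KM rule reported above can be examined with a much cruder, purely local calculation than the Bayesian utility functions the abstract describes: compare the parameter covariance implied by the Jacobian of the Michaelis-Menten model for an even design and for a 60/40 design. The nominal parameter values, substrate range, and constant-error assumption below are illustrative only.

```python
# Local (non-Bayesian) comparison of two substrate-concentration designs for
# v = Vmax * S / (Km + S); parameter covariance ~ inverse Fisher information,
# assuming constant measurement variance. Values are illustrative.
import numpy as np

Vmax, Km = 10.0, 2.0   # nominal parameter values (assumed)

def param_cov(S):
    S = np.asarray(S, dtype=float)
    dv_dVmax = S / (Km + S)                  # partial derivative wrt Vmax
    dv_dKm = -Vmax * S / (Km + S) ** 2       # partial derivative wrt Km
    J = np.column_stack([dv_dVmax, dv_dKm])  # Jacobian of the model
    return np.linalg.inv(J.T @ J)            # ~ covariance of (Vmax, Km)

even = np.linspace(0.2, 10.0, 10)                          # even spread over the range
skewed = np.concatenate([np.linspace(0.2, Km, 6),          # 60% of points below Km
                         np.linspace(Km + 0.5, 10.0, 4)])  # 40% above Km

for name, design in [("even spread", even), ("60/40 around Km", skewed)]:
    cov = param_cov(design)
    print(f"{name:16s} var(Vmax) ~ {cov[0, 0]:.4f}   var(Km) ~ {cov[1, 1]:.4f}")
```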

Relevance: 100.00%

Abstract:

In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of the information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types and sets of design rules, and the key conclusion that such designs should be based on some prior knowledge of KM and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design, such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. © 2003 Elsevier Science B.V. All rights reserved.
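One crude way to express the role of prior knowledge of KM computationally is to average a D-optimality score over draws from a prior on KM. This is only a loose stand-in for the Bayesian utility functions the paper develops; the prior, ranges, and criterion below are assumptions.

```python
# Prior-averaged design score for v = Vmax * S / (Km + S): expected log-determinant
# of the Fisher information under a (hypothetical) log-normal prior on Km.
import numpy as np

rng = np.random.default_rng(0)
Vmax = 10.0
km_prior = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=500)  # assumed prior on Km

def log_det_info(S, km):
    S = np.asarray(S, dtype=float)
    J = np.column_stack([S / (km + S), -Vmax * S / (km + S) ** 2])
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

def expected_utility(S):
    return np.mean([log_det_info(S, km) for km in km_prior])

designs = {
    "even spread": np.linspace(0.2, 10.0, 10),
    "60% below prior Km": np.concatenate([np.linspace(0.2, 2.0, 6),
                                          np.linspace(2.5, 10.0, 4)]),
}
for name, S in designs.items():
    print(f"{name:20s} expected log|I| = {expected_utility(S):.3f}")
```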

Relevance: 100.00%

Abstract:

The United Nations Intergovernmental Panel on Climate Change (IPCC) makes it clear that climate change is due to human activities, and it recognises buildings as a distinct sector among the seven analysed in its 2007 Fourth Assessment Report. Global concerns have escalated regarding carbon emissions and sustainability in the built environment. The built environment is a human-made setting to accommodate human activities, including buildings and transport, which covers an interdisciplinary field addressing design, construction, operation and management. Specifically, sustainable buildings are expected to achieve high performance throughout the life-cycle of siting, design, construction, operation, maintenance and demolition, in the following areas:

• energy and resource efficiency;
• cost effectiveness;
• minimisation of emissions that negatively impact global warming, indoor air quality and acid rain;
• minimisation of waste discharges; and
• maximisation of fulfilling the requirements of occupants' health and wellbeing.

Professionals in the built environment sector, for example urban planners, architects, building scientists, engineers, facilities managers, performance assessors and policy makers, will play a significant role in delivering a sustainable built environment. Delivering a sustainable built environment needs an integrated approach, so it is essential for built environment professionals to have interdisciplinary knowledge in building design and management. Building and urban designers need a good understanding of the planning, design and management of buildings in terms of low carbon and energy efficiency. There are a limited number of traditional engineers who know how to design environmental systems (services engineers) in great detail, yet there is a very large market for technologists with multi-disciplinary skills who are able to identify the need for, envision and manage the deployment of a wide range of sustainable technologies, both passive (architectural) and active (engineering systems), and select the appropriate approach. Employers seek applicants with skills in analysis, decision-making/assessment, computer simulation and project implementation. An integrated approach is expected in practice, which encourages built environment professionals to think 'out of the box' and learn to analyse real problems using the most relevant approach, irrespective of discipline. The Design and Management of Sustainable Built Environment book aims to produce readers able to apply fundamental scientific research to solve real-world problems in the general area of sustainability in the built environment. The book contains twenty chapters covering climate change and sustainability, urban design and assessment (planning, travel systems, urban environment), urban management (drainage and waste), buildings (indoor environment, architectural design and renewable energy), simulation techniques (energy and airflow), management (end-user behaviour, facilities and information), assessment (materials and tools), procurement, and case studies (the BRE Science Park). Chapters one and two present general global issues of climate change and sustainability in the built environment. Chapter one illustrates that applying the concepts of sustainability to the urban environment (buildings, infrastructure, transport) raises some key issues for tackling climate change, resource depletion and energy supply.
Buildings, and the way we operate them, play a vital role in tackling global greenhouse gas emissions. Holistic thinking and an integrated approach to delivering a sustainable built environment are highlighted. Chapter two demonstrates the important role that buildings (their services and appliances) and building energy policies play in this area. Substantial investment is required to implement such policies, much of which will earn a good return. Chapters three and four discuss urban planning and transport. Chapter three stresses the importance of using modelling techniques at the early stage of strategic master-planning for a new development or a retrofit programme. A general framework for sustainable urban-scale master planning is introduced. This chapter also addresses the need to develop a more holistic and pragmatic view of how the built environment performs, in order to produce tools that help design for a higher level of sustainability and, in particular, reflect how people plan, design and use it. Chapter four discusses microcirculation, an emerging and challenging area concerned with changing travel behaviour in the quest for urban sustainability. The chapter outlines the main drivers of travel behaviour and choices, the workings of the transport system and its interaction with urban land use. It also covers the new approach to managing urban traffic to maximise economic, social and environmental benefits. Chapters five and six present topics related to urban microclimates, including thermal and acoustic issues. Chapter five discusses urban microclimates and the urban heat island, as well as the interrelationship of urban design (urban forms and textures) with energy consumption and urban thermal comfort. It introduces models that can be used to analyse microclimates for a careful and considered approach to planning sustainable cities. Chapter six discusses urban acoustics, focusing on urban noise evaluation and mitigation. Various prediction and simulation methods for sound propagation in micro-scale urban areas, as well as techniques for large-scale urban noise mapping, are presented. Chapters seven and eight discuss urban drainage and waste management. The growing demand for housing and commercial developments in the 21st century, as well as the environmental pressure caused by climate change, has increased the focus on sustainable urban drainage systems (SUDS). Chapter seven discusses the SUDS concept, an integrated approach to surface water management. It takes into consideration quality, quantity and amenity aspects to provide a more pleasant habitat for people as well as to increase the biodiversity value of the local environment. Chapter eight discusses the main issues in urban waste management. It points out that population increases, land use pressures, and technical and socio-economic influences have become inextricably interwoven, and that ensuring a safe means of dealing with humanity's waste is becoming more challenging. Sustainable building design needs to consider healthy indoor environments, minimising energy for heating, cooling and lighting, and maximising the utilisation of renewable energy. Chapter nine considers how people respond to the physical environment and how that is used in the design of indoor environments. It considers environmental components such as thermal, acoustic, visual, air quality and vibration, and their interaction and integration.
Chapter ten introduces the concept of passive building design and its relevant strategies, including passive solar heating, shading, natural ventilation, daylighting and thermal mass, in order to minimise heating and cooling loads as well as energy consumption for artificial lighting. Chapter eleven discusses the growing importance of integrating Renewable Energy Technologies (RETs) into buildings, the range of technologies currently available, and what to consider during technology selection processes in order to minimise carbon emissions from burning fossil fuels. The chapter draws to a close by highlighting the issues concerning system design and the need for careful integration and management of RETs once installed, and for home owners and operators to understand the characteristics of the technology in their building. Computer simulation tools play a significant role in sustainable building design because, as modern built environment design (buildings and systems) becomes more complex, it requires tools to assist in the design process. Chapter twelve gives an overview of the primary benefits and users of simulation programs and the role of simulation in the construction process, and examines the validity and interpretation of simulation results. Chapter thirteen focuses on the Computational Fluid Dynamics (CFD) simulation method used for optimisation and performance assessment of technologies and solutions for sustainable building design, and on its application through a series of case studies. People and building performance are intimately linked. A better understanding of occupants' interaction with the indoor environment is essential to building energy and facilities management. Chapter fourteen focuses on the issue of occupant behaviour: principally its impact, and the influence of building performance on occupants. Chapter fifteen explores the discipline of facilities management and the contribution that this emerging profession makes to securing sustainable building performance. The chapter highlights a much greater diversity of opportunities in sustainable building design that extends well into the operational life. Chapter sixteen reviews the concepts of modelling information flows and the use of Building Information Modelling (BIM), describing these techniques and how these aspects of information management can help drive sustainability. An explanation is offered of why information management is the key to 'life-cycle' thinking in sustainable building and construction. Measurement of building performance and sustainability is a key issue in delivering a sustainable built environment. Chapter seventeen identifies the means by which construction materials can be evaluated with respect to their sustainability. It identifies the key issues that affect the sustainability of construction materials and the methodologies commonly used to assess them. Chapter eighteen focuses on the topics of green building assessment, green building materials, and sustainable construction and operation. Commonly used assessment tools such as the BRE Environmental Assessment Method (BREEAM), Leadership in Energy and Environmental Design (LEED) and others are introduced. Chapter nineteen discusses sustainable procurement, one of the areas to have naturally emerged from the overall sustainable development agenda. It aims to ensure that the current use of resources does not compromise the ability of future generations to meet their own needs.
Chapter twenty is a best-practice exemplar: the BRE Innovation Park, which features a number of demonstration buildings that have been built to the UK Government's Code for Sustainable Homes. It showcases the very latest innovative methods of construction and cutting-edge technology for sustainable buildings. In summary, the Design and Management of Sustainable Built Environment book is the result of the co-operation and dedication of the individual chapter authors. We hope readers benefit from gaining a broad interdisciplinary knowledge of design and management in the built environment in the context of sustainability. We believe that the knowledge and insights of our academic and professional colleagues from different institutions and disciplines illuminate a way of delivering a sustainable built environment through holistic, integrated design and management approaches. Last, but not least, I would like to take this opportunity to thank all the chapter authors for their contributions. I would like to thank David Lim for his assistance with the editorial work and proofreading.

Relevance: 100.00%

Abstract:

An experiment was carried out to evaluate the performance, egg quality, and morphometry of the reproductive tract, liver, pancreas and tongue of laying hens submitted to different molting methods. Two hundred and eighty-eight 72-week-old Isa Brown layers were distributed according to a completely randomized design with six treatments (molting methods) and six replicates of eight birds each. Layers were fed diets containing 3000 ppm zinc oxide, 60 ppm or 120 ppm nicarbazin, or 30 ppm or 60 ppm salinomycin, or were submitted to feed fasting. Data were submitted to analysis of variance and means were compared by Tukey's test at the 5% probability level. The molting methods alternative to feed fasting were effective in inducing molting in layers and provided good performance results in the second laying cycle.
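The analysis reported above (a completely randomized design analysed by ANOVA followed by Tukey's test at the 5% level) can be sketched as follows; the egg-production numbers and treatment labels are invented for illustration, not the study data.

```python
# One-way ANOVA plus Tukey's HSD on made-up egg-production data for six molting methods.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
methods = ["ZnO 3000", "Nicarb 60", "Nicarb 120", "Salino 30", "Salino 60", "Fasting"]
data = pd.DataFrame({
    "method": np.repeat(methods, 6),                         # 6 replicates per treatment
    "egg_production": rng.normal(loc=80, scale=4, size=36),  # hypothetical % production
})

model = ols("egg_production ~ C(method)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))                       # ANOVA table
print(pairwise_tukeyhsd(data["egg_production"], data["method"], alpha=0.05))
```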

Relevance: 100.00%

Abstract:

Purpose: To evaluate the survival rate, success rate, load to fracture, and finite element analysis (FEA) of maxillary central incisors and canines restored using ceramic veneers with varying preparation designs.

Methods and Materials: Thirty human maxillary central incisors and 30 canines were allocated to the following four groups (n = 15) based on the preparation design and type of tooth: Gr1 = central incisor with a conservative preparation; Gr2 = central incisor with a conventional preparation with palatal chamfer; Gr3 = canine with a conservative preparation; Gr4 = canine with a conventional preparation with palatal chamfer. Ceramic veneers (lithium disilicate) were fabricated and adhesively cemented (Variolink Veneer). The specimens were subjected to 4 × 10^6 mechanical cycles and evaluated every 500,000 cycles to detect failures. Specimens that survived were subjected to a load-to-fracture test. Two-dimensional models were built (Rhinoceros 4.0) and evaluated (MSC.Patran 2005r2 and MSC.Marc 2005r2) on the basis of their maximum principal stress (MPS) values. Survival rates were analyzed using the Kaplan-Meier test (alpha = 0.05) and load-to-fracture values were analyzed using the Student t-test (alpha = 0.05).

Results: All groups showed 100% survival rates. The Student t-test did not show any difference between the groups for load to fracture. FEA showed higher MPS values in the specimens restored using veneers with the conventional preparation design with palatal chamfer.

Conclusion: Preparation design did not affect the fracture load of canines and central incisors, but veneers with the conventional preparation design with palatal chamfer exhibited a tendency to generate higher MPS values.
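A minimal sketch of the two statistical steps mentioned (Kaplan-Meier survival over the fatigue cycling and a Student t-test on load-to-fracture values) is shown below; all specimen numbers and loads are invented, and only the analysis pipeline is illustrated.

```python
# Kaplan-Meier survival for fully censored fatigue data (100% survival at 4e6 cycles)
# and a two-sample t-test on hypothetical load-to-fracture values.
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
cycles = np.full(15, 4_000_000)        # every specimen completed the full cycling
events = np.zeros(15, dtype=int)       # 0 = no failure observed (censored)

kmf = KaplanMeierFitter()
kmf.fit(durations=cycles, event_observed=events, label="Gr1 (conservative prep)")
print(kmf.survival_function_.tail(1))  # survival probability at 4e6 cycles

load_gr1 = rng.normal(360, 40, 15)     # load to fracture in N (hypothetical)
load_gr2 = rng.normal(350, 40, 15)
t_stat, p_value = stats.ttest_ind(load_gr1, load_gr2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f} (alpha = 0.05)")
```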

Relevance: 100.00%

Abstract:

Background: An estimated 10–20 million individuals are infected with the retrovirus human T-cell leukemia virus type 1 (HTLV-1). While the majority of these individuals remain asymptomatic, 0.3–4% develop a neurodegenerative inflammatory disease termed HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). HAM/TSP results in progressive demyelination of the central nervous system and is a differential diagnosis of multiple sclerosis (MS). The etiology of HAM/TSP is unclear, but evidence points to a role for CNS-infiltrating T-cells in pathogenesis. Recently, the HTLV-1 Tax protein has been shown to induce transcription of the human endogenous retrovirus (HERV) families W, H and K. Intriguingly, numerous studies have implicated these same HERV families in MS, though this association remains controversial.

Results: Here, we explore the hypothesis that HTLV-1 infection results in the induction of HERV antigen expression and the elicitation of HERV-specific T-cell responses which, in turn, may be reactive against neurons and other tissues. PBMC from 15 HTLV-1-infected subjects, 5 of whom presented with HAM/TSP, were comprehensively screened for T-cell responses to overlapping peptides spanning HERV-K(HML-2) Gag and Env. In addition, we screened for responses to peptides derived from diverse HERV families, selected on the basis of predicted epitope binding. We observed a lack of responses to each of these peptide sets.

Conclusions: Although the limited scope of our screening prevents us from conclusively disproving our hypothesis, the current study does not provide data supporting a role for HERV-specific T-cell responses in HTLV-1-associated immunopathology.
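Screens like the one described rely on overlapping peptides tiled across a protein. A small generator is sketched below; typical designs use 15-mers overlapping by 11 residues, but the lengths actually used in the study are not stated, and the sequence is a placeholder rather than real HERV-K(HML-2) Gag.

```python
# Generate overlapping peptides tiled across a protein sequence (placeholder sequence;
# peptide length and overlap are typical values, not those reported in the study).
def overlapping_peptides(sequence: str, length: int = 15, overlap: int = 11):
    step = length - overlap
    return [sequence[start:start + length]
            for start in range(0, max(len(sequence) - length, 0) + 1, step)]

fragment = "MGQTKSKIKSKYASYLSFIKILLKRGGVKVSTKNLIKLFQIIEQFCPWFPEQGTLDLKDW"  # placeholder
for peptide in overlapping_peptides(fragment):
    print(peptide)
```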

Relevance: 100.00%

Abstract:

The new Brazilian ABNT NBR 15575 Standard (the "Standard") recommends two methods for analyzing housing thermal performance: a simplified method and a computational simulation method. The aim of this paper is to evaluate both methods and the coherence between them. For this, the thermal performance of a low-cost single-family house was evaluated by applying the procedures prescribed by the Standard. To accomplish this study, the EnergyPlus software was selected. Comparative analyses of the house with varying envelope U-values and solar absorptance of the external walls were performed in order to evaluate the influence of these parameters on the results. The results show limitations in the Standard's current computational simulation method due to several aspects: the weather files, the lack of consideration of passive strategies, and inconsistency with the simplified method. Therefore, this research indicates that there are aspects to be improved in the Standard so that it can better represent the real thermal performance of social housing in Brazil.
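One of the envelope parameters varied in the simulations, the wall U-value, can be computed directly from layer resistances. The sketch below uses generic layer data (not the evaluated house) and standard surface resistances.

```python
# Wall U-value from a series sum of layer resistances plus surface resistances.
layers = [               # (thickness in m, thermal conductivity in W/mK) - generic examples
    (0.025, 0.72),       # external render
    (0.090, 0.90),       # ceramic brick
    (0.025, 0.72),       # internal plaster
]
R_si, R_se = 0.13, 0.04  # internal / external surface resistances, m2K/W

R_total = R_si + R_se + sum(thickness / conductivity for thickness, conductivity in layers)
U = 1.0 / R_total
print(f"R = {R_total:.3f} m2K/W  ->  U = {U:.2f} W/m2K")
```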

Relevance: 100.00%

Abstract:

The research activity characterizing the present thesis was mainly centered on the design, development and validation of methodologies for the estimation of stationary and time-varying connectivity between different regions of the human brain during specific complex cognitive tasks. This activity involved two main aspects: i) the development of a stable, consistent and reproducible procedure for functional connectivity estimation with a high impact on the neuroscience field, and ii) its application to real data from healthy volunteers eliciting specific cognitive processes (attention and memory). In particular, the methodological issues addressed in the present thesis consisted in identifying an approach, applicable in the neuroscience field, able to: i) include all the cerebral sources in the connectivity estimation process; ii) accurately describe the temporal evolution of connectivity networks; iii) assess the significance of connectivity patterns; and iv) consistently describe relevant properties of brain networks. The advances provided in this thesis allowed the identification of quantifiable descriptors of cognitive processes during a high-resolution EEG experiment involving subjects performing complex cognitive tasks.
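The thesis methods (all-source, time-varying connectivity estimators with statistical validation) are far richer than any single measure, but a minimal sense of what functional connectivity between regions means can be given with the magnitude-squared coherence between two synthetic EEG channels sharing an alpha-band component; all signals and parameters below are invented.

```python
# Magnitude-squared coherence between two synthetic EEG channels that share a 10 Hz source.
import numpy as np
from scipy.signal import coherence

fs = 256                                    # sampling rate in Hz (typical EEG value)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
source = np.sin(2 * np.pi * 10 * t)         # shared alpha-band component
ch1 = source + 0.5 * rng.standard_normal(t.size)
ch2 = 0.8 * source + 0.5 * rng.standard_normal(t.size)

freqs, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
alpha_band = (freqs >= 8) & (freqs <= 12)
print(f"mean alpha-band coherence: {cxy[alpha_band].mean():.2f}")
```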

Relevance: 100.00%

Abstract:

Percutaneous nephrolithotomy (PCNL) for the treatment of renal stones and other related renal diseases has proved its efficacy and has stood the test of time compared with open surgical methods and extracorporeal shock wave lithotripsy. However, access to the collecting system of the kidney is not easy, because the available intra-operative imaging modalities provide only a two-dimensional view of the surgical scenario. With this lack of visual information, several punctures are often necessary, which increases the risk of renal bleeding; splanchnic, vascular or pulmonary injury; or damage to the collecting system, which sometimes makes continuation of the procedure impossible. In order to address this problem, this paper proposes a workflow for the introduction of a stereotactic needle guidance system for PCNL procedures. An analysis of the imposed clinical requirements and an instrument guidance approach that provides the physician with more intuitive planning and visual guidance for accessing the collecting system of the kidney are presented.
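The geometric core of any such guidance workflow is turning a planned entry point and target calyx into an insertion depth and direction. The toy sketch below uses invented coordinates in a single image frame and leaves out everything that makes the real problem hard (registration, tracking, respiratory motion).

```python
# Toy needle-trajectory computation from an entry point and a target, both in the same
# (e.g. CT) coordinate frame; coordinates are invented for illustration.
import numpy as np

entry = np.array([112.0, 48.0, -35.0])    # mm, planned skin puncture site
target = np.array([86.0, 91.0, -12.0])    # mm, target calyx of the collecting system

vector = target - entry
depth = np.linalg.norm(vector)            # required insertion depth
direction = vector / depth                # unit vector to align the needle guide with
print(f"insertion depth: {depth:.1f} mm, direction: {np.round(direction, 3)}")
```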

Relevance: 100.00%

Abstract:

Background: Most adults infected with HIV achieve viral suppression within a year of starting combination antiretroviral therapy (cART). It is important to understand the risk of AIDS events or death for patients with a suppressed viral load.

Methods and Findings: Using data from the Collaboration of Observational HIV Epidemiological Research Europe (2010 merger), we assessed the risk of a new AIDS-defining event or death in successfully treated patients. We accumulated episodes of viral suppression for each patient while on cART, each episode beginning with the second of two consecutive plasma viral load measurements <50 copies/ml and ending with either a measurement >500 copies/ml, the first of two consecutive measurements between 50 and 500 copies/ml, cART interruption, or administrative censoring. We used stratified multivariate Cox models to estimate the association between time-updated CD4 cell count and a new AIDS event or death, or death alone. 75,336 patients contributed 104,265 suppression episodes and were suppressed while on cART for a median of 2.7 years. The mortality rate was 4.8 per 1,000 years of viral suppression. A higher CD4 cell count was always associated with a reduced risk of a new AIDS event or death, with a hazard ratio per 100 cells/µl (95% CI) of 0.35 (0.30–0.40) for counts <200 cells/µl, 0.81 (0.71–0.92) for counts 200 to <350 cells/µl, 0.74 (0.66–0.83) for counts 350 to <500 cells/µl, and 0.96 (0.92–0.99) for counts ≥500 cells/µl. A higher CD4 cell count became even more beneficial over time for patients with CD4 cell counts <200 cells/µl.

Conclusions: Despite the low mortality rate, the risk of a new AIDS event or death follows a CD4 cell count gradient in patients with viral suppression. A higher CD4 cell count was associated with the greatest benefit for patients with a CD4 cell count <200 cells/µl, but still conferred some slight benefit for those with a CD4 cell count ≥500 cells/µl.
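A schematic version of the survival model described (a stratified Cox model relating CD4 cell count to a new AIDS event or death during suppression) is sketched below with the lifelines library. The data frame is synthetic and far simpler than the per-episode, time-updated data used in the study.

```python
# Stratified Cox model on synthetic suppression-episode data (illustration only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "years_suppressed": rng.exponential(2.7, n),      # follow-up within suppression
    "event": rng.binomial(1, 0.03, n),                # new AIDS event or death
    "cd4_per_100": rng.normal(5.0, 2.0, n).clip(0.5), # CD4 count in units of 100 cells/µl
    "cohort": rng.integers(0, 5, n),                  # stratification variable
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_suppressed", event_col="event", strata=["cohort"])
cph.print_summary()   # hazard ratio per 100 cells/µl on the synthetic data
```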

Relevance: 100.00%

Abstract:

OBJECTIVE: To compare regimens consisting of either efavirenz or nevirapine and two or more nucleoside reverse transcriptase inhibitors (NRTIs) among HIV-infected, antiretroviral-naive, and AIDS-free individuals with respect to clinical, immunologic, and virologic outcomes.

DESIGN: Prospective studies of HIV-infected individuals in Europe and the US included in the HIV-CAUSAL Collaboration.

METHODS: Antiretroviral therapy-naive and AIDS-free individuals were followed from the time they started an NRTI, efavirenz or nevirapine, classified as following one or both types of regimens at baseline, and censored when they started an ineligible drug or at 6 months if their regimen was not yet complete. We estimated the 'intention-to-treat' effect of nevirapine versus efavirenz regimens on clinical, immunologic, and virologic outcomes. Our models included baseline covariates and adjusted for potential bias introduced by censoring via inverse probability weighting.

RESULTS: A total of 15,336 individuals initiated an efavirenz regimen (274 deaths, 774 AIDS-defining illnesses) and 8,129 individuals initiated a nevirapine regimen (203 deaths, 441 AIDS-defining illnesses). The intention-to-treat hazard ratios [95% confidence interval (CI)] for nevirapine versus efavirenz regimens were 1.59 (1.27, 1.98) for death and 1.28 (1.09, 1.50) for AIDS-defining illness. Individuals on nevirapine regimens experienced a 12-month increase in CD4 cell count that was smaller by 11.49 cells/µl, and were 52% more likely to have virologic failure at 12 months, than those on efavirenz regimens.

CONCLUSIONS: Our intention-to-treat estimates are consistent with lower mortality, a lower incidence of AIDS-defining illness, a larger 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for efavirenz compared with nevirapine.
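The censoring adjustment mentioned above (inverse probability weighting) can be illustrated with stabilised weights from a logistic model for remaining uncensored. The covariates, data, and censoring mechanism below are synthetic; the published analysis fits far richer models.

```python
# Stabilised inverse-probability-of-censoring weights from a logistic model (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(38, 10, n),
    "baseline_cd4": rng.normal(350, 120, n),
    "nevirapine": rng.binomial(1, 0.35, n),
})
# Synthetic censoring mechanism: older individuals slightly more likely to be censored.
p_uncensored = 1 / (1 + np.exp(-(1.5 - 0.01 * (df["age"] - 38))))
df["uncensored"] = rng.binomial(1, p_uncensored.to_numpy())

covariates = ["age", "baseline_cd4", "nevirapine"]
denominator_model = LogisticRegression().fit(df[covariates], df["uncensored"])
p_hat = denominator_model.predict_proba(df[covariates])[:, 1]
numerator = df["uncensored"].mean()                           # marginal probability (stabiliser)
df["ipw"] = np.where(df["uncensored"] == 1, numerator / p_hat, 0.0)
print(df.loc[df["uncensored"] == 1, "ipw"].describe())
```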