971 results for systems - evolution
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theory and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based but human-understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – the disregard for the fact that instances can be conceptualized independently of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place.
Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres in information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling through the layered information model can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools such as classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve, and interoperate in a networked environment, knowledge organization specialists must consider the kind of semantic class independence that Parsons and Wand propose for information modeling.
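The central idea, that instances exist independently of any class assignment, can be illustrated with a toy sketch (hypothetical Python, not from Parsons and Wand's paper): instances carry only properties, and classes are defined afterwards as predicates over those properties, so an instance can satisfy several classes or none.

```python
# Toy two-layer model: the instance layer stores properties only; the class
# layer is a set of membership tests computed over properties, never stored
# on the instance itself. All names and facets here are hypothetical.
class Instance:
    def __init__(self, **properties):
        self.properties = dict(properties)

def make_class(predicate):
    """A 'class' in this sketch is just a membership test over properties."""
    return predicate

book = Instance(title="On Growth and Form", medium="print", subject="morphology")
dataset = Instance(title="GC simulations", medium="digital", subject="astronomy")

printed_material = make_class(lambda i: i.properties.get("medium") == "print")
digital_resource = make_class(lambda i: i.properties.get("medium") == "digital")

# Class membership is derived on demand, so instances can be re-classified
# (schema evolution) without touching the instance layer.
assert printed_material(book) and not printed_material(dataset)
assert digital_resource(dataset)
```

The design point is that reassigning or redefining classes never mutates instances, which is one reading of how the layered model eases schema evolution.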
Abstract:
One of the main unresolved questions in science is how non-living matter became alive, a process known as abiogenesis, which seeks to explain how, from a primordial-soup scenario containing simple molecules and following a "bottom-up" approach, complex biomolecules emerged to form the first living system, known as a protocell. A protocell is defined by the interplay of three sub-systems considered requirements for life: information molecules, metabolism, and compartmentalization. This thesis investigates the role of compartmentalization during the emergence of life, how simple membrane aggregates could evolve into entities able to develop "life-like" behaviours, and in particular how such evolution could happen without the presence of information molecules. Our ultimate objective is to create an autonomous evolvable system; to do so, we try to engineer life following a "top-down" approach, in which an initial platform capable of evolving chemistry is constructed, with the chemistry dependent on its robotic adjunct, and the platform is then de-constructed in iterative steps until it is fully disconnected from the evolvable system, leaving the system inherently autonomous. The first project of this thesis describes how the initial platform was designed and built. The platform was based on the model of a standard liquid-handling robot, the main difference with respect to similar robots being that we used a 3D printer to prototype the robot and build its main equipment, such as the liquid-dispensing system, the tool-movement mechanism, and the washing system. The robot was able to mix different components and create populations of droplets in a Petri dish filled with an aqueous phase. The Petri dish was then observed by a camera, which analysed the behaviours exhibited by the droplets and fed this information back to the robot.
Using this loop, the robot was able to implement an evolutionary algorithm in which populations of droplets were evolved towards defined life-like behaviours. The second project of this thesis aimed to remove as many mechanical parts as possible from the robot while keeping the evolvable chemistry intact. To do so, we encapsulated the functionalities of the previous liquid-handling robot into a single monolithic 3D-printed device. This device was able to mix different components and generate populations of droplets in an aqueous phase, and was also equipped with a camera to analyse the experiments. Moreover, because the devices were fabricated entirely in a 3D printer, we were also able to alter the experimental arena by adding different obstacles among which to evolve the droplets, enabling us to study how environmental changes can shape evolution. By doing so, we embodied evolutionary characteristics into our device, removing constraints from the physical platform and taking a step toward a possible autonomous evolvable system.
Abstract:
Self-replication and compartmentalization are two central properties thought to be essential for minimal life, and understanding how such processes interact in the emergence of complex reaction networks is crucial to exploring the development of complexity in chemistry and biology. Autocatalysis can emerge from multiple different mechanisms, such as formation of an initiator, template self-replication, and physical autocatalysis (where micelles formed from the reaction product solubilize the reactants, leading to higher local concentrations and therefore higher rates). Amphiphiles are also used in artificial-life studies to create protocell models such as micelles, vesicles, and oil-in-water droplets, and can increase reaction rates by encapsulation of reactants. So far, no template self-replicator exists that is capable of compartmentalization, or of transferring this molecular-scale phenomenon to micro- or macro-scale assemblies. Here a system is demonstrated in which an amphiphilic imine catalyses its own formation by joining a non-polar alkyl tail group with a polar carboxylic acid head group to form a template, which was shown to form reverse micelles by Dynamic Light Scattering (DLS). The kinetics of this system were investigated by 1H NMR spectroscopy, clearly showing that a template self-replication mechanism operates, though there was no evidence that the reverse micelles participated in physical autocatalysis. Active oil droplets, composed of a mixture of insoluble organic compounds in an aqueous sub-phase, can undergo processes such as division, self-propulsion, and chemotaxis, and are studied as models for minimal cells, or protocells. Although in most cases the Marangoni effect is responsible for the forces on the droplet, the behaviour of the droplet depends heavily on its exact composition.
Though theoretical models can calculate the forces on a droplet, modelling a mixture of oils on an aqueous surface, where compounds from the oil phase dissolve and diffuse through the aqueous phase, is beyond current computational capability. The behaviour of a droplet in an aqueous phase is determined by its composition, but can only be discovered through experiment. By using an evolutionary algorithm and a liquid-handling robot to conduct droplet experiments and decide, entirely autonomously, which compositions to test next, the composition of the droplet becomes a chemical genome capable of evolution. The selection is carried out according to a fitness function, which ranks each formulation on how well it conforms to the chosen fitness criteria (e.g. movement or division). Over successive generations, significant increases in fitness are achieved, and the increase is greater with more components (i.e. greater complexity). Other chemical processes, such as chemiluminescence and gelation, were investigated in active oil droplets, demonstrating the possibility of controlling chemical reactions by selective droplet fusion. Potential future applications might include combinatorial chemistry, or additional fitness goals for the genetic algorithm. Combining the self-replication and droplet-protocell research, it was demonstrated that the presence of the amphiphilic replicator lowers the interfacial tension between droplets of a reaction mixture in organic solution and the alkaline aqueous phase, causing them to divide. Periodic sampling by a liquid-handling robot revealed that the extent of droplet fission increased as the reaction progressed, producing more individual protocells as self-replication proceeded. This demonstrates coupling of the molecular-scale phenomenon of template self-replication to a macroscale physicochemical effect.
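The evolutionary loop described above can be sketched as follows. This is a minimal, illustrative genetic algorithm in Python: the genome is a normalized composition vector, and the fitness function here is an arbitrary mathematical stand-in for the camera-derived measurements (movement, division) used in the real experiments.

```python
import random

random.seed(42)

# Toy genome: mass fractions of four hypothetical oils.
N_COMPONENTS = 4

def random_genome():
    w = [random.random() for _ in range(N_COMPONENTS)]
    s = sum(w)
    return [x / s for x in w]  # normalize to a composition

def fitness(genome):
    # Placeholder for the measured droplet behaviour: a smooth function
    # with a known optimum, purely for illustration.
    target = [0.4, 0.3, 0.2, 0.1]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    child = [max(1e-6, g + random.gauss(0, rate)) for g in genome]
    s = sum(child)
    return [x / s for x in child]  # re-normalize after mutation

def evolve(pop_size=20, generations=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # rank selection, elitist
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
```

In the actual experiments the `fitness` call is replaced by running the formulation on the robot and scoring the droplets' filmed behaviour, so each generation costs real laboratory time.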
Abstract:
This thesis introduces a new innovation methodology called IDEAS(R)EVOLUTION, developed through an ongoing experimental research project started in 2007. This new approach to innovation was initially based on Design Thinking for innovation theory and practice. The concept of Design Thinking for innovation has received much attention in recent years, spreading from the design and designers' knowledge field to other knowledge areas, mainly business management and marketing. A human-centered approach, radical collaboration, creativity, and breakthrough thinking are the main founding principles of Design Thinking that were adopted by those knowledge areas due to their assertiveness and fitness to the business context and to the evolving complexity of the market. Open Innovation, user-centered innovation, and later the Living Labs models also emerged as answers to market and consumer pressure and the desire for new products, new services, or new business models. Innovation became the principal focus and strategic orientation of business management. All these changes also had an impact on marketing theory. It is now possible to build better strategies, communication plans, and continuous dialogue systems with the target audience, incorporating their insights and promoting them to be the main dissemination ambassadors of a company's innovations in the market. Drawing upon data from five case studies, the empirical findings in this dissertation suggest that companies need to shift from a Design Thinking for innovation approach to a holistic, multidimensional, and integrated innovation system. The innovation context is complex, and companies need deeper systems than the success formulas that “commercial” Design Thinking for innovation preaches.
They need to learn how to change their organizational culture, how to empower their workforce and collaborators, how to incorporate external stakeholders in their innovation processes, how to measure and create key performance indicators throughout the innovation process to support better decision-making, and how to integrate meaning and purpose into their innovation philosophy. Finally, they need to understand that the strategic innovation effort is not a “one-shot” story; it is about creating a continuous flow of interaction and dialogue with their clients within a “value creation chain” mindset.
Abstract:
The Oxygen Reduction Reaction (ORR) requires a platinum-based catalyst to reduce the activation barrier. Among the most promising alternative catalysts are carbon-based materials: graphene and carbon nanotube (CNT) derivatives. ORR on a carbon-based substrate involves both the less efficient two-electron process and the optimal four-electron process. New synthetic strategies to produce tunable graphene-based materials using graphene oxide (GO) as a base inspired the first part of this work. The Hydrogen Evolution Reaction (HER) is a slow process that also requires platinum or palladium as catalyst. In the second part of this work, we develop and use a technique for electrodeposition of Ni nanoparticles using NiCl2 as precursor in the presence of ascorbate ligands. Electrodeposition of nano-nickel onto flat glassy carbon (GC) and onto nitrogen-doped reduced graphene oxide (rGO-N) substrates is studied. State-of-the-art catalysts for CO2RR require the rare metals rhenium or rhodium. In recent years significant research has been done on non-noble metals and molecular systems for use as electro- and photo-catalysts (artificial photosynthesis). As Cu-Zn alloys show good CO2RR performance, here we applied the same nanoparticle electrosynthesis technique using CuCl2 and ZnCl2 as precursors and observed successful formation of the nanoparticles and notable activity in the presence of CO2. Using rhenium complexes as catalysts is another popular approach, and di-nuclear complexes have a positive cooperative effect. More recently, a growing family of pre-catalysts based on the earth-abundant metal manganese has emerged as a promising, cheaper alternative. Here we study the cooperative effects of di-nuclear manganese complex derivatives used as homogeneous electrocatalysts, as well as a rhenium-functionalized polymer used as a heterogeneous electrocatalyst.
Abstract:
The turnover from internal-combustion-engine vehicles to EVs goes by the name of electrification. The push electrification has experienced in the last decade is linked to the still ongoing evolution in power-electronics technology for charging systems, which is why an evolution in testing strategies and testing equipment is crucial too. The project this dissertation is based on concerns the investigation of a new EV simulator design that optimizes the structure of the testing equipment used by the company that commissioned this work. The project requirements can be summarized in two points: reduction of space occupation and implementation of parallel charging. Some components were completely redesigned, and others were substituted with equivalents that could perform the same tasks. In this way it was possible to reduce the space occupation of the simulator and to increase the efficiency of the testing device. Moreover, the possibility of combining different charging simulations was investigated by launching two testing procedures in parallel on a single machine, properly equipped to support the two charging protocols used. On the back of the results achieved in the body of this dissertation, a new design for the EV simulator was proposed, reducing space occupation and improving space efficiency. The testing device thus proved far more compact, bringing gains in safety and productivity, along with a 25% cost reduction. Furthermore, parallel charging was included in the proposed design, since the conducted tests clearly showed the feasibility of parallel charging sessions. The results presented in this work can thus be used to build the first prototype of the new EV simulator.
Abstract:
In the last few years, mobile wireless technology has gone through a revolutionary change. Web-enabled devices have evolved into essential tools for communication, information, and entertainment. The fifth generation (5G) of mobile communication networks is envisioned as a key enabler of the coming wireless revolution. Millimeter-wave (mmWave) spectrum and the evolution of Cloud Radio Access Networks (C-RANs) are two of the main technological innovations of 5G wireless systems and beyond. Because of the current spectrum shortage, mmWaves have been proposed for next-generation systems, providing larger bandwidths and higher data rates. Consequently, new radio channel models are being developed. Recently, deterministic ray-based models such as Ray Tracing (RT) have become more attractive thanks to their frequency agility and reliable predictions. A modern RT software tool has been calibrated and used to analyze the mmWave channel. Knowledge of the electromagnetic properties of materials is essential for this purpose; hence, an item-level electromagnetic characterization of common construction materials has been successfully carried out to obtain their complex relative permittivity. A complete tuning of the RT tool has been performed against indoor and outdoor measurement campaigns at 27 and 38 GHz, setting the basis for the future development of advanced beamforming techniques that rely on deterministic propagation models (such as RT). C-RAN is a novel mobile network architecture that can address a number of challenges network operators face in meeting continuously growing customer demands. C-RANs have already been adopted in advanced 4G deployments; however, some issues remain, especially considering the bandwidth requirements set by the forthcoming 5G systems. Open RAN specifications have been proposed to overcome the new 5G challenges on C-RAN architectures, including synchronization aspects.
This work describes an FPGA implementation of the Synchronization Plane for an O-RAN-compliant radio system.
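The role of the complex relative permittivity in ray-based channel prediction can be illustrated with the Fresnel relation at normal incidence. The permittivity value below is a generic concrete-like number used purely for illustration; it is not one of the thesis measurements.

```python
import cmath

def normal_incidence_reflectance(eps_r):
    """Power reflectance at normal incidence for a non-magnetic half-space,
    given its complex relative permittivity eps_r (air on the incident side)."""
    n = cmath.sqrt(eps_r)      # complex refractive index, n = sqrt(eps_r)
    r = (1 - n) / (1 + n)      # Fresnel amplitude reflection coefficient
    return abs(r) ** 2

# Illustrative concrete-like permittivity at mmWave frequencies (hypothetical).
R = normal_incidence_reflectance(5.31 - 0.48j)
```

An RT tool evaluates coefficients like this (generalized to oblique incidence and polarization) at every wall interaction, which is why the measured permittivities feed directly into the calibration.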
Abstract:
This thesis work was motivated by an internal benchmark dealing with the output regulation problem for a nonlinear non-minimum-phase system under full-state feedback. The system under consideration structurally suffers from finite escape time, a condition that makes the output regulation problem very hard even for very simple steady-state evolutions or exosystem dynamics, such as a simple integrator. This situation leads to studying the approaches developed for controlling non-minimum-phase systems and how they affect feedback performance. Despite many frequency-domain results, only a few works describe the performance limitations in a state-space representation. In our opinion, the most relevant research thread exploits the so-called inner-outer decomposition. This decomposition splits the non-minimum-phase system under consideration into a cascade of two subsystems: a minimum-phase system (the outer) that contains all the poles of the original system, and an all-pass non-minimum-phase system (the inner) that contains all the unavoidable pathologies of the unstable zero dynamics. This cascade decomposition inspired work on functional observers for linear and nonlinear systems. The idea of a functional observer is to exploit only the measured signals from the system to asymptotically reconstruct a certain function of the system states, without necessarily reconstructing the whole state vector. The ability to asymptotically reconstruct a state functional plays an important role in the design of a feedback controller able to stabilize a non-minimum-phase system.
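The inner-outer decomposition can be checked numerically on a toy transfer function (not the thesis benchmark): G(s) = (s-1)/((s+2)(s+3)) factors into an all-pass inner part carrying the right-half-plane zero and a minimum-phase outer part with the same poles.

```python
# Inner-outer decomposition of a simple non-minimum-phase transfer function:
#   G(s)  = (s - 1) / ((s + 2)(s + 3))
#   Gi(s) = (s - 1) / (s + 1)   inner: all-pass, carries the RHP zero at s = 1
#   Go(s) = (s + 1) / ((s + 2)(s + 3))   outer: minimum phase, same poles as G
# so G = Gi * Go, and |Gi(jw)| = 1 for every frequency w.

def G(s):  return (s - 1) / ((s + 2) * (s + 3))
def Gi(s): return (s - 1) / (s + 1)
def Go(s): return (s + 1) / ((s + 2) * (s + 3))

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    assert abs(G(s) - Gi(s) * Go(s)) < 1e-12   # the factorization is exact
    assert abs(abs(Gi(s)) - 1.0) < 1e-12       # inner factor is all-pass
```

Because the inner factor has unit magnitude everywhere on the imaginary axis, all magnitude shaping must come from the outer factor, which is one way to see why the RHP zero imposes unavoidable performance limitations.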
Abstract:
An essential role in the global energy transition is attributed to Electric Vehicles (EVs). The energy for EV traction can be generated by renewable energy sources (RES), including at the local level through distributed power plants such as photovoltaic (PV) systems. However, EV integration with electrical systems might not be straightforward: intermittent RES, combined with high and uncontrolled aggregate EV charging, require an evolution toward new planning paradigms for energy systems. In this context, this work aims to provide a practical solution for integrating EV charging in electrical systems with RES. A method for predicting the power required by an EV fleet at a charging hub (CH) is developed in this thesis. The proposed forecasting method considers the main parameters on which charging demand depends, and its results are analyzed in depth under different scenarios. To reduce EV load intermittency, methods for managing the charging power of EVs are proposed. The main target was to provide Charging Management Systems (CMS) that modulate EV charging to optimize specific performance indicators, such as system self-consumption, peak-load reduction, and PV exploitation. Controlling the EV charging power to achieve specific optimization goals is also known as Smart Charging (SC). The proposed techniques are applied to real-world scenarios, demonstrating the performance improvements obtained with SC strategies. A viable alternative for maximizing integration with intermittent RES generation is energy storage: Battery Energy Storage Systems (BESS) may act as a buffer between peak load and RES production. A sizing algorithm for PV+BESS integration in EV charging hubs is provided, aiming to optimize the system's energy and economic performance. The results give an overview of the optimal size the PV+BESS plant should have to improve whole-system performance in different scenarios.
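A minimal sketch of the smart-charging idea (illustrative only; the thesis CMS and its optimization targets are more elaborate): allocate hourly EV charging power to follow PV production first, topping up from the grid only where needed, under site and charger power limits.

```python
# Greedy smart-charging sketch with hypothetical limits and PV profile:
# charge from PV first to maximize self-consumption, then fill any remaining
# fleet demand from the grid without exceeding site or charger limits.
def smart_charge(pv_kw, demand_kwh, max_site_kw, max_ev_kw):
    schedule = []
    remaining = demand_kwh
    for pv in pv_kw:                         # one entry per hour
        p = min(max_ev_kw, max_site_kw, max(pv, 0.0), remaining)
        schedule.append(p)
        remaining -= p                       # 1-hour steps, so kW -> kWh
    # Second pass: if PV alone did not cover the demand, top up from the grid.
    for i in range(len(schedule)):
        if remaining <= 0:
            break
        headroom = min(max_ev_kw, max_site_kw) - schedule[i]
        top_up = min(headroom, remaining)
        schedule[i] += top_up
        remaining -= top_up
    return schedule

pv = [0, 2, 6, 8, 7, 3]                      # hypothetical PV profile (kW)
plan = smart_charge(pv, demand_kwh=20, max_site_kw=10, max_ev_kw=7)
```

A real CMS would replace this greedy pass with an optimization over forecast PV, tariffs, and per-vehicle departure times, but the constraint structure (site limit, charger limit, energy balance) is the same.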
Abstract:
This thesis presents a study of globular clusters (GCs), based on the analysis of Monte Carlo simulations, with the aim of defining new empirical parameters that are measurable from observations and able to trace the different phases of their dynamical evolution. During their long-term dynamical evolution, due to mass segregation and dynamical friction, massive stars transfer kinetic energy to lower-mass objects and sink toward the cluster center. This continuous transfer of kinetic energy from the core to the outskirts triggers the runaway contraction of the core, known as "core collapse" (CC), followed by episodes of expansion and contraction called gravothermal oscillations. Such internal dynamical evolution corresponds to significant variations in the structure of the system. Determining the dynamical age of a cluster can be challenging, as it depends on various internal and external properties. The traditional classification of GCs as CC or post-CC systems relies on detecting a steep power-law cusp in the central density profile, which may not always be reliable due to post-CC oscillations or other processes. In this thesis, the normalized cumulative radial distribution (nCRD) within a fraction of the half-mass radius is analyzed, and three diagnostics (A5, P5, and S2.5) are defined. These diagnostics are sensitive to dynamical evolution and can distinguish pre-CC clusters from post-CC clusters. The analysis, performed using multiple simulations with different initial conditions, including varying binary fractions and the presence of dark remnants, showed that the time variations of the diagnostics follow distinct patterns depending on the binary fraction and on the retention or ejection of black holes. The analysis is then extended to a larger set of simulations matching the observed properties of Galactic GCs, and the parameters show the potential to distinguish the dynamical stages of observed clusters as well.
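For illustration, a normalized cumulative radial distribution can be computed as below. The exact definitions of A5, P5, and S2.5 used in the thesis are not reproduced here, so the P5-like quantity in this sketch is only a stand-in.

```python
# Sketch of an nCRD: the fraction of stars enclosed within radius r, for r up
# to a chosen fraction of the half-mass radius, normalized over the stars
# inside that limit. Star radii below are hypothetical.
def ncrd(radii, r_half, r_max_frac=0.1):
    limit = r_max_frac * r_half
    inside = sorted(r for r in radii if r <= limit)
    n = len(inside)
    return [(r / r_half, (i + 1) / n) for i, r in enumerate(inside)]

# Hypothetical star radii, in units where the half-mass radius is 1.0.
radii = [i / 1000 for i in range(1, 201)]
curve = ncrd(radii, r_half=1.0)

# A P5-like quantity: the nCRD value at 5% of the half-mass radius. A more
# centrally concentrated (dynamically older) cluster would push this higher.
p5_like = max(frac for x, frac in curve if x <= 0.05)
```

Diagnostics of this kind are attractive observationally because they only require star positions relative to the cluster center and a half-mass radius estimate, not a fitted density profile.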
Abstract:
My Ph.D. thesis was dedicated to exploring different paths to store sunlight in the form of chemical bonds, through the formation of solar fuels. During the past three years, I have focused my research on two of these: molecular hydrogen (H2) and the reduced nicotinamide adenine dinucleotide enzyme cofactor NAD(P)H. The first could become the ideal energy carrier for a truly clean energy system; it currently represents the best chance to liberate humanity from its dependence on fossil fuels. To address this, I studied different systems that can achieve proton reduction upon light absorption. More specifically, part of my work was aimed at developing a cost-effective and stable catalyst to combine with a well-known photochemical cycle. To this end, I worked on transition metal oxides which, as demonstrated in this work, are promising H2 evolution catalysts, showing excellent activity, stability, and previously unreported versatility. Another branch of my work on hydrogen production dealt with the use of a new class of polymeric semiconductor materials to absorb light and convert it into H2. The second solar fuel mentioned above is a key component of one of the most powerful methods for chemical synthesis: enzyme catalysis. The high cost of its reduced forms prohibits large-scale utilization, so artificial photosynthetic approaches for regenerating it are being intensively studied. The first system I developed exploits the tremendous reducing properties of a scarcely known ruthenium complex able to reduce NAD+. Lastly, I sought to convert the sacrificial electron donor from its classical role into an active component of the system and, to boost the process, I built an autonomous microfluidic system able to generate highly reproducible amounts of NAD(P)H, demonstrating the superior performance of microfluidic reactors over batch reactors and representing another successful photochemical NAD(P)H regeneration system.
Abstract:
The scientific success of the LHC experiments at CERN depends heavily on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid, an infrastructure that connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in detector hardware and in software and computing systems, in order to deal with the huge increase in event rate expected from the High-Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, Artificial Intelligence has become increasingly relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims to contribute to a CMS R&D project on an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been updated with new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
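The model-agnostic pipeline idea can be sketched as follows. This is not the MLaaS4HEP API, just an illustration of the pattern: reader, preprocessing, and training step are swappable components, and the driver never needs to know the data format or the ML backend (real inputs would be ROOT files streamed in chunks).

```python
# Model-agnostic streaming pipeline sketch: the driver only wires components
# together, so any experiment's reader and any ML backend can be plugged in.
def run_pipeline(read_chunks, preprocess, model, train_step):
    """Stream data chunk by chunk, preprocess, and update the model."""
    for chunk in read_chunks():
        X, y = preprocess(chunk)
        model = train_step(model, X, y)
    return model

# Toy stand-ins for illustration.
def read_chunks():
    yield [(1.0, 0), (2.0, 1)]
    yield [(3.0, 1), (0.5, 0)]

def preprocess(chunk):
    X = [row[0] for row in chunk]
    y = [row[1] for row in chunk]
    return X, y

def train_step(model, X, y):
    # The "model" here is just running (count, sum) over positive examples,
    # standing in for an incremental fit of a real ML model.
    pos = [x for x, label in zip(X, y) if label == 1]
    seen, total = model
    return seen + len(pos), total + sum(pos)

model = run_pipeline(read_chunks, preprocess, model=(0, 0.0), train_step=train_step)
threshold = model[1] / model[0]   # mean of positives seen across all chunks
```

Streaming chunk by chunk is what allows files of arbitrary size: memory usage depends on the chunk size, not on the dataset.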
Abstract:
This study aimed at evaluating whether human papillomavirus (HPV) groups and E6/E7 mRNA of HPV 16, 18, 31, 33, and 45 are prognostic of cervical intraepithelial neoplasia (CIN) 2 outcome in women with a cervical smear showing a low-grade squamous intraepithelial lesion (LSIL). This cohort study included women with biopsy-confirmed CIN 2 who were followed up for 12 months, with cervical smear and colposcopy performed every three months. Women with a negative or low-risk HPV status showed 100% CIN 2 regression. The CIN 2 regression rates at the 12-month follow-up were 69.4% for women with alpha-9 HPV versus 91.7% for other HPV species or HPV-negative status (P < 0.05). For women with HPV 16, the CIN 2 regression rate at the 12-month follow-up was 61.4% versus 89.5% for other HPV types or HPV-negative status (P < 0.05). The CIN 2 regression rate was 68.3% for women who tested positive for HPV E6/E7 mRNA versus 82.0% for the negative results, but this difference was not statistically significant. The expectant management for women with biopsy-confirmed CIN 2 and previous cytological tests showing LSIL exhibited a very high rate of spontaneous regression. HPV 16 is associated with a higher CIN 2 progression rate than other HPV infections. HPV E6/E7 mRNA is not a prognostic marker of the CIN 2 clinical outcome, although this analysis cannot be considered conclusive. Given the small sample size, this study could be considered a pilot for future larger studies on the role of predictive markers of CIN 2 evolution.
Abstract:
The development and maintenance of the seal of the root canal system is key to the success of root canal treatment. Resin-based adhesive materials have the potential to reduce root canal microleakage because of their adhesive properties and penetration into the dentinal walls. Moreover, the irrigation protocol may influence the adhesion of resin-based sealers to root dentin. The objective of the present study was to evaluate the effect of different irrigation protocols on coronal bacterial microleakage of gutta-percha/AH Plus and Resilon/Real Seal Self-etch systems. One hundred ninety premolars were used. The teeth were divided into 18 experimental groups according to the irrigation protocol and filling material used. The protocols were: distilled water; sodium hypochlorite (NaOCl)+EDTA; NaOCl+H3PO4; NaOCl+EDTA+chlorhexidine (CHX); NaOCl+H3PO4+CHX; CHX+EDTA; CHX+H3PO4; CHX+EDTA+CHX; and CHX+H3PO4+CHX. Gutta-percha/AH Plus or Resilon/Real Seal SE was used as the root-filling material. Coronal microleakage was evaluated for 90 days against Enterococcus faecalis. Data were statistically analyzed using the Kaplan-Meier survival, Kruskal-Wallis, and Mann-Whitney tests. No significant difference was found among the groups using chlorhexidine or sodium hypochlorite during chemo-mechanical preparation followed by EDTA or phosphoric acid for smear layer removal. The same held for the filling materials. However, the statistical analyses revealed that a final flush with 2% chlorhexidine significantly reduced coronal microleakage. A final flush with 2% chlorhexidine after smear layer removal reduces coronal microleakage of teeth filled with gutta-percha/AH Plus or Resilon/Real Seal SE.
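The 90-day microleakage observations are time-to-event data, which is why the study relies on Kaplan-Meier survival analysis. A minimal sketch of the Kaplan-Meier estimate is shown below; the observation times and event flags are made up for illustration, not the study's measurements.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  observation day for each specimen
    events: 1 = leakage observed at that day, 0 = censored (no leakage by then)
    Returns a list of (time, survival probability) points at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving the risk set at t
        if d > 0:
            s *= 1 - d / at_risk                  # product-limit update
            curve.append((t, s))
        at_risk -= m
        i += m
    return curve

# Hypothetical data: leakage on days 10 and 20; four specimens censored.
curve = kaplan_meier([10, 20, 20, 90, 90, 90], [1, 1, 0, 0, 0, 0])
```

Each event time multiplies the running survival estimate by the fraction of at-risk specimens that did not leak, which is how censored teeth still contribute information.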
Abstract:
To analyze the effect of treatment approach on the outcomes of newborns (birth weight [BW] < 1,000 g) with patent ductus arteriosus (PDA) from the Brazilian Neonatal Research Network (BNRN): death, bronchopulmonary dysplasia (BPD), severe intraventricular hemorrhage (IVH III/IV), retinopathy of prematurity requiring surgery (ROPsur), necrotizing enterocolitis requiring surgery (NECsur), and death/BPD. This was a multicentre cohort study with retrospective data collection, including newborns (BW < 1,000 g) with gestational age (GA) < 33 weeks and an echocardiographic diagnosis of PDA, from 16 neonatal units of the BNRN, from January 1, 2010 to December 31, 2011. Newborns who died or were transferred by the third day of life, and those with a congenital malformation or infection, were excluded. Groups: G1, conservative approach (no treatment); G2, pharmacological (indomethacin or ibuprofen); G3, surgical ligation (regardless of previous treatment). Factors analyzed: antenatal corticosteroid, cesarean section, BW, GA, 5-min Apgar score < 4, male gender, Score for Neonatal Acute Physiology Perinatal Extension (SNAPPE II), respiratory distress syndrome (RDS), late sepsis (LS), mechanical ventilation (MV), surfactant (< 2 h of life), and duration of MV. Outcomes: death, O2 dependence at 36 weeks (BPD36wks), IVH III/IV, ROPsur, NECsur, and death/BPD36wks. Statistical analysis: Student's t-test, chi-squared test, or Fisher's exact test; odds ratio (95% CI); binary logistic regression and backward stepwise multiple regression, using MedCalc (Medical Calculator) software, version 12.1.4.0; p-values < 0.05 were considered statistically significant. Of 1,097 newborns selected, 494 were included: G1, 187 (37.8%); G2, 205 (41.5%); G3, 102 (20.6%). The highest mortality was observed in G1 (51.3%) and the lowest in G3 (14.7%). The highest frequencies of BPD36wks (70.6%) and ROPsur (23.5%) were observed in G3. The lowest occurrence of death/BPD36wks occurred in G2 (58.0%).
Pharmacological (OR 0.29; 95% CI: 0.14-0.62) and conservative (OR 0.34; 95% CI: 0.14-0.79) treatment were protective against the outcome death/BPD36wks. The conservative approach to PDA was associated with high mortality, the surgical approach with the occurrence of BPD36wks and ROPsur, and the pharmacological treatment was protective against the outcome death/BPD36wks.
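Effect sizes of the kind reported here (e.g. OR 0.29; 95% CI: 0.14-0.62) rest on odds ratios with log-odds confidence intervals. The study's adjusted estimates come from logistic regression, but the unadjusted 2x2-table case sketched below shows the core calculation; the counts used are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table.

    a, b: events / non-events in group 1
    c, d: events / non-events in group 2
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/100 deaths in the treated group vs 30/100 untreated.
or_, lo, hi = odds_ratio_ci(10, 90, 30, 70)
```

An OR below 1 with an upper confidence limit below 1, as in the study's reported values, is what "protective" means in this context.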