907 results for Cointegration analysis with structural breaks
Abstract:
Aim: Nowadays, research on orthopedic and dental implants is focused on titanium alloys because of their mechanical properties and corrosion resistance in the human body environment. Another important aspect to be investigated is their surface topography, which is very important for osseointegration. Roughening the implant surface with laser beam irradiation allows easier control of the microtopography and avoids surface contamination. The aim of this study was to assess the response of human bone marrow stem cells to a newly developed titanium alloy, Ti-15Mo, with surface topography modified by laser beam irradiation. Materials and methods: A total of 10 Ti machined disks (control), 10 Ti-15Mo machined disks and 10 Ti-15Mo disks treated by laser beam irradiation were prepared. To study how the Ti-15Mo surface topography can induce osteoblast differentiation in mesenchymal stem cells, the expression levels of bone-related genes and mesenchymal stem cell markers were analyzed using real-time Reverse Transcription-Polymerase Chain Reaction. Results: In Test 1 (comparison between Ti-15Mo machined disks and Ti machined disks), quantitative real-time RT-PCR showed a significant induction of ALPL, FOSL1 and SPP1, which increased by 20% or more. In Test 2 (comparison between Ti-15Mo laser-treated disks and Ti machined disks), all investigated genes were up-regulated. Comparing Test 1 and Test 2, COL1A1, COL3A1, FOSL1 and ENG markedly increased their expression, whereas RUNX2, ALPL and SPP1 expression remained substantially unchanged. Conclusion: The present study demonstrated that laser-treated Ti-15Mo alloys are promising materials for implant applications.
Abstract:
The aim of this study was to evaluate the stress distribution in the peri-implant bone by simulating the biomechanical influence of implants with different diameters and regular or platform-switched connections, by means of 3-dimensional finite element analysis. Five mathematical models of an implant-supported central incisor were created by varying the implant diameter (5.5 and 4.5 mm, internal hexagon) and the abutment platform (regular and platform-switched). For the cortical bone, the highest stress values (σmax and σvM) were observed in situation R1, followed by situations S1, R2, S3, and S2. For the trabecular bone, the highest stress values (σmax) were observed in situation S3, followed by situations R1, S1, R2, and S2. The influence of platform switching was more evident for cortical bone than for trabecular bone and was seen mainly with a large reduction of the platform diameter.
Abstract:
The present work uses multivariate statistical analysis to establish the main sources of error in Quantitative Phase Analysis (QPA) using the Rietveld method. The quantitative determination of crystalline phases using X-ray powder diffraction is a complex measurement process whose results are influenced by several factors. Ternary mixtures of Al2O3, MgO and NiO were prepared under controlled conditions and the diffraction patterns were obtained using the Bragg-Brentano geometric arrangement. It was possible to establish four critical sources of variation: the experimental absorption and the scale factor of NiO, which is the phase with the greatest linear absorption coefficient of the ternary mixture; the instrumental characteristics, represented by mechanical errors of the goniometer and sample displacement; the other two phases (Al2O3 and MgO); and the temperature and relative humidity of the air in the laboratory. These error sources severely impair QPA with the Rietveld method, and it is therefore necessary to control them during the measurement procedure.
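The abstract does not state which multivariate technique was applied; as a loose illustration only, a principal component analysis of replicate refined weight fractions is one common way to explore where the spread in QPA results comes from. The table of fractions below is invented, not the paper's data.

```python
# Illustrative PCA on replicate Rietveld weight fractions (toy numbers only)
import numpy as np
from sklearn.decomposition import PCA

# rows = repeated refinements, columns = wt% of Al2O3, MgO, NiO
fractions = np.array([
    [33.1, 33.5, 33.4],
    [32.7, 33.9, 33.4],
    [33.4, 33.0, 33.6],
    [32.9, 33.6, 33.5],
    [33.2, 33.2, 33.6],
])

pca = PCA(n_components=2)
pca.fit(fractions)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("component loadings (per phase):\n", np.round(pca.components_, 3))
```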
Abstract:
This paper presents the application of artificial immune systems to the analysis of the structural integrity of a building. Inspired by a biological process, it uses the negative selection algorithm to identify and characterize structural failure. This methodology can assist professionals in the inspection of mechanical and civil structures, helping them identify and characterize flaws so that preventive maintenance can be performed, ensuring the integrity of the structure and supporting decision-making. To evaluate the methodology, a two-story building was modeled and several situations were simulated (baseline condition and abnormal conditions), yielding a database of signals that was used as input data for the negative selection algorithm. The results show the efficiency, robustness and accuracy of the method.
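As a rough illustration of how negative selection can flag abnormal structural signatures, the sketch below generates random detectors that avoid the baseline ("self") feature vectors and flags any sample covered by a detector. The feature dimensions, radii and data are assumptions for illustration, not the authors' implementation.

```python
# Minimal negative-selection sketch (illustrative assumptions only): detectors
# are random points that do not match any baseline 'self' vector; a sample
# covered by a detector is flagged as a possible failure.
import numpy as np

rng = np.random.default_rng(0)

def generate_detectors(self_set, n_detectors, radius, dim):
    """Keep random candidates lying farther than `radius` from every self vector."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.uniform(0.0, 1.0, size=dim)
        if np.min(np.linalg.norm(self_set - candidate, axis=1)) > radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_failure(sample, detectors, radius):
    """A sample inside any detector's radius does not look like 'self'."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= radius))

# Toy usage: baseline vibration features cluster around 0.5 in each dimension.
baseline = rng.uniform(0.4, 0.6, size=(200, 3))
detectors = generate_detectors(baseline, 500, 0.15, 3)
print(is_failure(np.array([0.50, 0.52, 0.48]), detectors, 0.15))  # healthy-like: typically False
print(is_failure(np.array([0.90, 0.10, 0.90]), detectors, 0.15))  # anomalous: typically True
```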
Abstract:
This paper presents the application of artificial neural networks to the analysis of the structural integrity of a building. The main objective is to apply an artificial neural network based on adaptive resonance theory, the ARTMAP-Fuzzy neural network, to the identification and characterization of structural failure. This methodology can help professionals in the inspection of structures, identifying and characterizing flaws so that preventive maintenance can be conducted, ensuring the integrity of the structure and supporting decision-making. To validate the methodology, a two-story building was modeled and various situations were simulated from this model (baseline condition and abnormal conditions), resulting in a database of signals that was used as input data for the ARTMAP-Fuzzy network. The results show the efficiency, robustness and accuracy of the method.
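For orientation only, the sketch below is a much-simplified fuzzy-ART-style classifier (complement coding, fuzzy choice function, vigilance test); it omits the full ARTMAP match-tracking machinery, and its data, classes and parameters are invented rather than those of the authors' network.

```python
# Much-simplified fuzzy-ART-style classifier sketch (not the authors' ARTMAP-Fuzzy network)
import numpy as np

class SimpleFuzzyARTMAP:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w, self.labels = [], []                      # category weights / class labels

    @staticmethod
    def _code(x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])               # complement coding

    def train(self, x, label):
        I = self._code(x)
        # visit categories in order of the fuzzy choice function
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(I, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho and self.labels[j] == label:   # vigilance + label check
                self.w[j] = self.beta * np.minimum(I, self.w[j]) + (1 - self.beta) * self.w[j]
                return
        self.w.append(I.copy())                            # commit a new category
        self.labels.append(label)

    def predict(self, x):
        I = self._code(x)
        scores = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
        return self.labels[int(np.argmax(scores))]

# Toy usage with two invented signal signatures
net = SimpleFuzzyARTMAP()
for x, y in [([0.10, 0.20], "baseline"), ([0.15, 0.25], "baseline"),
             ([0.80, 0.90], "failure"), ([0.85, 0.95], "failure")]:
    net.train(x, y)
print(net.predict([0.12, 0.22]), net.predict([0.90, 0.90]))   # -> baseline failure
```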
Abstract:
The study of short implants is relevant to the biomechanics of dental implants, and research on increased crown height has implications for daily clinical practice. The aim of this study was to analyze the biomechanical interactions of a single implant-supported prosthesis with different crown heights under vertical and oblique forces, using the 3-D finite element method. Six 3-D models were designed with InVesalius 3.0, Rhinoceros 3D 4.0, and SolidWorks 2010 software. Each model consisted of a mandibular bone block segment including an implant supporting a screwed metal-ceramic crown. The crown height was set at 10, 12.5, and 15 mm. The applied force was 200 N (axial) and 100 N (oblique). ANOVA and Tukey tests were performed; p < 0.05 was considered statistically significant. The increase in crown height did not influence the stress distribution on the prosthetic screw (p > 0.05) under axial load. However, crown heights of 12.5 and 15 mm significantly worsened the stress distribution on the screws and on the cortical bone (p < 0.001) under oblique load. A high crown-to-implant (C/I) ratio impaired the microstrain distribution in bone tissue under axial and oblique loads (p < 0.001). Increased crown height is a potentially deleterious factor for the screws and for the different regions of bone tissue.
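For context, the sketch below shows the usual one-way ANOVA followed by a Tukey HSD comparison, of the kind mentioned above, applied to invented stress values for the three crown heights; these numbers are not the study's data.

```python
# Hedged sketch of an ANOVA + Tukey comparison on made-up stress values
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
stress = {h: rng.normal(loc, 2.0, size=10)            # MPa, illustrative only
          for h, loc in [("10 mm", 30.0), ("12.5 mm", 35.0), ("15 mm", 41.0)]}

f_stat, p_value = stats.f_oneway(*stress.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> group effect

values = np.concatenate(list(stress.values()))
groups = np.repeat(list(stress.keys()), 10)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # which heights differ
```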
Abstract:
Results of the analysis of the dynamic behavior of the flashover phenomenon on polluted high-voltage insulators are presented. These results were obtained from mathematical and experimental models that introduce the influence of the variable thickness of the pollution layer deposited on the high-voltage insulator surface. The flashover was analyzed by introducing a variation in the thickness of the channel in Obenaus' model, simulating a pollution layer of variable thickness. The objective was to better reproduce the real pollution layer deposited on insulators operating in polluted regions. Two types of thickness variation, placed along the discharge path, were used: a sudden variation, using a step, and a smooth variation, using a ramp. Comparison between the mathematical and experimental models showed that the introduction of a ramp makes Obenaus' model more effective in analyzing the behavior of the flashover phenomenon.
Abstract:
The common practice in industry is to perform flutter analyses considering the generalized stiffness and mass matrices obtained from the finite element method (FEM) and the generalized aerodynamic force matrices obtained from a panel method, such as the doublet lattice method. These analyses are often redone if significant differences are found between the structural frequencies and damping ratios determined from ground vibration tests and those predicted by the FEM. This unavoidable rework can result in a lengthy and costly analysis process during aircraft development. In this context, this paper presents an approach to perform flutter analysis including uncertainties in natural frequencies and damping ratios. The main goal is to ensure the stability of the nominal system when these modal parameters vary within a limited range. The aeroelastic system is written as an affine parameter model, and robust stability is verified by solving for a Lyapunov function through linear matrix inequalities and convex optimization.
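As a rough illustration of this kind of robustness check (not the paper's aeroelastic model, and using an analytic Lyapunov solve as a simpler stand-in for a full LMI feasibility problem), the sketch below tests quadratic stability of an affine model A(δ) = A0 + δ·A1 by finding a Lyapunov matrix P for the nominal system and checking that the same P certifies stability at both vertices of the parameter range; all matrices are toy values.

```python
# Quadratic-stability check with one common Lyapunov matrix (toy example)
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A0 = np.array([[0.0, 1.0], [-4.0, -0.4]])    # nominal second-order mode (assumed values)
A1 = np.array([[0.0, 0.0], [-0.4, -0.04]])   # affine variation of the uncertain parameter

# Lyapunov matrix for the nominal system: A0^T P + P A0 = -I
P = solve_continuous_lyapunov(A0.T, -np.eye(2))

# A(d)^T P + P A(d) must remain negative definite over the parameter range
for d in (-1.0, 1.0):
    Ad = A0 + d * A1
    worst = np.linalg.eigvalsh(Ad.T @ P + P @ Ad).max()
    print(f"delta = {d:+.1f}: largest eigenvalue = {worst:.3f} (< 0 means stable)")
```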
Abstract:
Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from the CRAN site of R. The authors’ title is not misleading, but I was very pleasantly surprised by how deep the word “applied” is here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on the spatial objects? A few years ago I would have said “no,” especially to the “want” part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize the effort needed to get all of our spatial analyses accomplished. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and I learned a lot.
Abstract:
Terbinafine hydrochloride (TerbHCl) is an allylamine derivative with fungicidal action, especially against dermatophytes. Different analytical methods have been reported for quantifying TerbHCl in different samples; these procedures require time-consuming sample preparation or expensive instrumentation. In this paper, electrochemical methods involving capillary electrophoresis with contactless conductivity detection, and amperometry associated with batch injection analysis, are described for the determination of TerbHCl in pharmaceutical products. In the capillary electrophoresis experiments, terbinafine was protonated and analyzed in the cationic form in less than 1 min. A linear range from 1.46 to 36.4 µg mL⁻¹ in acetate buffer solution and a detection limit of 0.11 µg mL⁻¹ were achieved. In the amperometric studies, terbinafine was oxidized at +0.85 V with high throughput (225 injections h⁻¹) and a good linear range (10–100 µmol L⁻¹). It was also possible to determine the antifungal agent using simultaneous conductometric and potentiometric titrations in the presence of 5% ethanol. The electrochemical methods were applied to the quantification of TerbHCl in different tablet samples; the results were comparable with the values indicated by the manufacturer and those found using titrimetry according to the Pharmacopoeia. The electrochemical methods are simple, rapid and appropriate alternatives for quantifying this drug in real samples.
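For reference, the arithmetic behind a linear range and detection limit is typically a least-squares calibration line with LOD ≈ 3.3·s_blank/slope (an ICH-style estimate); the concentrations, signals and blank standard deviation below are invented for illustration and are not the paper's measurements.

```python
# Sketch of a calibration line and detection-limit estimate (invented data)
import numpy as np

conc = np.array([1.46, 5.0, 10.0, 20.0, 36.4])         # µg/mL
signal = np.array([0.8, 2.7, 5.3, 10.6, 19.4])         # arbitrary detector units

slope, intercept = np.polyfit(conc, signal, 1)
s_blank = 0.02                                          # std. dev. of blank signal (assumed)
lod = 3.3 * s_blank / slope
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, LOD ≈ {lod:.2f} µg/mL")
```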
Abstract:
Data visualization techniques are powerful tools for the handling and analysis of multivariate systems. One such technique, known as parallel coordinates, was used to support the diagnosis of an event, detected by a neural network-based monitoring system, in a boiler at a Brazilian kraft pulp mill. Its main attraction is the possibility of visualizing several variables simultaneously. The diagnostic procedure was carried out step by step, going through exploratory, explanatory, confirmatory, and communicative goals. This tool allowed the boiler dynamics to be visualized more easily than with commonly used univariate trend plots. In addition, it facilitated the analysis of other aspects, namely relationships among process variables, distinct modes of operation, and discrepant data. The whole analysis revealed, firstly, that the period involving the detected event was associated with a transition between two distinct normal modes of operation and, secondly, the presence of unusual changes in process variables at that time.
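As a minimal sketch of the technique itself, pandas can draw a parallel-coordinates plot in a few lines; the variable names and values below are placeholders, not the mill's actual process tags.

```python
# Minimal parallel-coordinates plot with placeholder process variables
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "steam_flow":    [0.82, 0.80, 0.55, 0.52],
    "drum_pressure": [0.71, 0.69, 0.40, 0.38],
    "O2_excess":     [0.30, 0.33, 0.62, 0.65],
    "mode":          ["normal A", "normal A", "normal B", "normal B"],
})

parallel_coordinates(df, class_column="mode", colormap="viridis")
plt.title("Several variables and operating modes seen at once")
plt.tight_layout()
plt.show()
```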
Abstract:
The application of Isogeometric Analysis (IA) with T-splines [1] demands a partition of the parametric space, C, into a tiling containing T-junctions, called a T-mesh. The T-splines are used both for the geometric modelling of the physical domain, D, and as the basis of the numerical approximation. They have the advantage over NURBS of allowing local refinement. In this work we propose a procedure to construct T-spline representations of complex domains so that they can be applied to the solution of elliptic PDEs with IA. In previous works [2, 3] we accomplished this task by using a tetrahedral parametrization…
Abstract:
The performance of the H → ZZ* → 4l analysis is studied in the context of the High Luminosity upgrade of the LHC collider, with the CMS detector. The high luminosity (up to L = 5 × 10³⁴ cm⁻² s⁻¹) of the accelerator poses very challenging experimental conditions. In particular, the number of overlapping events per bunch crossing will increase to 140. To cope with this difficult environment, the CMS detector will be upgraded in two stages: Phase-I and Phase-II. The tools used in the analysis are the CMS Full Simulation and the fast parametrized Delphes simulation. A validation of Delphes with respect to the Full Simulation is performed using reference Phase-I detector samples. Delphes is then used to simulate the Phase-II detector response. The Phase-II configuration is compared with the Phase-I detector and with the same Phase-I detector affected by aging processes, both modeled with the Full Simulation framework. Conclusions on these three scenarios are drawn: the degradation in performance observed with the “aged” scenario shows that a major upgrade of the detector is mandatory, while the specific upgrade configuration studied keeps the same performance as in Phase-I and, in the case of the four-muon channel, even exceeds it.
Abstract:
In this thesis, some characteristics of multiplex networks are studied; in particular, the analysis focuses on quantifying the differences between the layers of the multiplex. The dissimilarities are evaluated both by observing the connections of single nodes in different layers and by estimating the different partitions of the layers. Several important measures for the characterization of multiplexes are then introduced and used to build community detection methods. The difference between the partitions of two layers is quantified using a mutual information measure. The use of the hypergeometric test to identify nodes that are over-represented in a layer is also examined, showing the effectiveness of the test as a function of the similarity of the layers. These methods for characterizing the properties of multiplex networks are applied to real biological data. The data were collected by the DILGOM study with the aim of determining the genetic, transcriptomic and metabolic implications of obesity and the metabolic syndrome, and they are used by the Mimomics project to determine relationships among different omics. In this thesis, the metabolic data are analyzed with a multiplex network approach to check for differences between the relationships of blood compounds in obese and normal-weight people.
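For illustration only (toy labels and counts, not the DILGOM/Mimomics data), the two quantities mentioned above can be computed along these lines: a normalized mutual information between the community partitions of two layers, and a hypergeometric p-value for a node whose neighbours are over-represented in one community.

```python
# Toy sketch: partition similarity between layers and node over-representation
import numpy as np
from scipy.stats import hypergeom
from sklearn.metrics import normalized_mutual_info_score

# Community labels of the same nodes in two layers of the multiplex
partition_layer1 = [0, 0, 0, 1, 1, 2, 2, 2]
partition_layer2 = [0, 0, 1, 1, 1, 2, 2, 0]
print("NMI between layer partitions:",
      round(normalized_mutual_info_score(partition_layer1, partition_layer2), 3))

# Hypergeometric test: N nodes in the layer, K of them in community c,
# the node has n neighbours, k of which fall in community c.
N, K, n, k = 200, 30, 15, 9
p_value = hypergeom.sf(k - 1, N, K, n)   # P(X >= k)
print(f"over-representation p-value: {p_value:.2e}")
```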
Abstract:
The human brain is composed of a complex network, formed by bundles of axons, that connects the different cerebral areas. The arcuate fasciculus links the area responsible for language comprehension with the one dedicated to its production. The arcuate fasciculus is present in both cerebral hemispheres, although the left one is often used predominantly. In this thesis, the differences between the right and left arcuate fasciculus were evaluated in a sample of healthy subjects using tractography, an advanced, non-invasive technique that allows the trajectory of the fibres to be reconstructed from diffusion-weighted MR (Magnetic Resonance) images. For this purpose I used a probabilistic algorithm, which estimates the probability of connection of the fibre under study with the different cerebral areas, even where it crosses fibres belonging to other bundles. Thanks to the implementation of this method, it was possible to obtain an accurate reconstruction of the arcuate fasciculus, even in the right hemisphere, where the reconstruction is often so critical that it is not feasible with other tractography algorithms. By parametrizing the geometry of the tract, I then divided the arcuate fasciculus into twenty segments and compared the diffusion measures evaluated in the right and left hemispheres. These analyses reveal a wide variability in the geometry of the arcuate fasciculus, both between subjects and between hemispheres. In the right hemisphere the arcuate fasciculus crosses more fibres belonging to other bundles. In the left hemisphere the arcuate fibres are more compact, and a greater connectivity with other brain areas involved in language functions is also measured. In the second phase of the study I applied the same method to two patients with cerebral lesions, with the aim of assessing the damage to the arcuate fasciculus ipsilateral to the lesion and of estimating whether mechanisms of structural plasticity were triggered in the contralateral hemisphere. This method can be implemented, in a group of homogeneous patients, to identify diagnostic MR markers in the pre-surgical planning phase and prognostic MR markers of functional language recovery.