Abstract:
Environmental decay in porous masonry materials, such as brick and mortar, is a widespread problem concerning both new and historic masonry structures. The decay mechanisms are quite complex, depending on several interconnected parameters and on the interaction with the specific micro-climate. Materials undergo aesthetic and substantial changes in character, but while many studies have been carried out, the mechanical aspect has remained largely understudied, although it bears true importance from the structural viewpoint. A quantitative assessment of masonry material degradation, and of how it affects the load-bearing capacity of masonry structures, appears to be missing. The research work carried out, limiting the attention to brick masonry, addresses this issue through an experimental laboratory approach via different integrated testing procedures, both non-destructive and mechanical, together with monitoring methods. Attention was focused on the transport of moisture and salts and on the damaging effects caused by the crystallization of two different salts, sodium chloride and sodium sulphate. Many series of masonry specimens, very different in size and purpose, were used to track the damage process from its beginning and to monitor its evolution over a number of years. At the same time, suitable testing techniques (non-destructive, mini-invasive, analytical, and monitoring) were validated for these purposes. The specimens were exposed to different aggressive agents (in terms of type of salt, brine concentration, artificial vs. open-air natural ageing, …), tested by different means (qualitative vs. quantitative, non-destructive vs. mechanical testing, single points vs. wide areas, …), and differed in size (1-, 2-, 3-header-thick walls, full-scale walls vs. small specimens, brick columns and triplets vs. small walls, masonry specimens vs. single brick units and mortar prisms, …).
Different advanced testing methods and novel monitoring techniques were applied in an integrated, holistic approach for the quantitative assessment of the masonry health state.
Abstract:
To date, forensic investigation of explosions has offered only limited possibilities for tracing the explosives used, since the material is generally destroyed in the explosion. The tracing of explosives is to be facilitated with the aid of identification tagging substances. These represent a unique code that can be recovered and identified even after a detonation. The unambiguous information assigned to the code can thus be read out, providing the police with further leads during the investigation. The aim of the present work is to investigate the behaviour of selected rare earth elements (REE) during an explosion. An identification tagging substance based on lanthanide phosphates offers the possibility of combining different lanthanides within a single particle, whereby a large number of codes can be generated. A change in the initial composition of the code can thus be traced very well, even after an explosion, by analysing a single particle, and the suitability of the tagging substance can thereby be assessed. A further objective is to verify the applicability of inductively coupled plasma mass spectrometry (ICP-MS) and of particle analysis by scanning electron microscopy (SEM) for the analysis of the dispersed identification tagging substances. In summary, the results of the ICP-MS analysis and the SEM particle analysis indicate a fractionation of the investigated lanthanides, or of their reaction products, after the explosion, as a function of their thermal stability. The findings show an enrichment of the lanthanides with higher temperature resistance in larger particles, which implies an enrichment of lanthanides with lower temperature resistance in smaller particles.
This can be partly explained by a fractionation process depending on the temperature stability of the lanthanides or their reaction products. The mechanisms underlying the fractionation, and their mutual interaction during an explosion, could not be conclusively clarified within the scope of this work. The general applicability, and under certain circumstances the necessary complementary use, of the two methods ICP-MS and SEM particle analysis is demonstrated in this work. With its large sampled area and high accuracy, ICP-MS is a good method for characterising the concentration ratios of the investigated lanthanides. SEM particle analysis, on the other hand, permits an unambiguous differentiation of the element association per particle in the case of contamination of the samples with other lanthanide-containing particles. In contrast to ICP-MS, it can thus provide information on the nature and composition of the contamination. Within the investigations carried out, the sampling technique applied for ICP-MS represented an ideal type of sampling. On other surfaces, however, it could lead to systematically biased results as a consequence of the fractionation into different particle sizes. To ensure the general applicability of ICP-MS with regard to the analysis of dispersed lanthanides, further detonations on different sample surfaces should be carried out and, if necessary, further sampling, digestion, and enrichment procedures should be evaluated.
Abstract:
The research sets out to analyse one of those controversial architectural seasons, far from the international high roads of the nascent Neues Bauen: Swedish National Romanticism, reread through the experience of its foremost exponent, Ragnar Östberg (1866-1945). The aim of the thesis is not only a revision of the historiographical criticism, thereby shedding light on one of those personalities considered marginal, but also to derive, from a comparative reading of two of his projects never investigated until now, those elements that make architecture an "urban fact" in which the community can recognise itself and, in parallel, a representation of that community. The Stockholm archipelago and the process of "renovatio urbis" to which it was subjected at the very dawn of the twentieth century were the settings in which the two projects came to life: the complex formed by the Stockholms Stadshuset and the adjacent, never-built part of the Nämndhuset, and Villa Geber. They condense two dimensions contained in a city immersed in the landscape: the urban nature of the municipal building and the domestic nature of the detached urban villa. The research weaves an itinerary of disclosure through a dual interpretative matrix: "genius loci" and urban memories. The chapters seek to demonstrate how the two case studies express that oscillation between the spirit of the place and the remembrance of the urban forms of tradition. This analysis leads us on a journey in search of the atlas of "urban memories", gathered during travels and training, thereby encompassing the analogical world of cultural references to other traditional European architectures. The two projects stand in opposite expansion areas of Stockholm and, despite their difference in scale, are clear expressions of appropriateness to the place and of analogous formal structures.
Stockholms Stadshuset-Nämndhuset and Villa Geber express Östberg's method, in which the references gathered through the imagination passive are transmuted and assembled through the imagination active.
Abstract:
The lattice formulation of Quantum ChromoDynamics (QCD) has become a reliable tool providing ab initio calculations of low-energy quantities. Despite numerous successes, systematic uncertainties, such as discretisation effects, finite-size effects, and contamination from excited states, are inherent in any lattice calculation. Simulations with controlled systematic uncertainties and close to the physical pion mass have become state-of-the-art. We present such a calculation for various hadronic matrix elements using non-perturbatively O(a)-improved Wilson fermions with two dynamical light quark flavours. The main topics covered in this thesis are the axial charge of the nucleon, the electromagnetic form factors of the nucleon, and the leading hadronic contributions to the anomalous magnetic moment of the muon. Lattice simulations typically tend to underestimate the axial charge of the nucleon by 5-10%. We show that accounting for excited-state contamination using the summed operator insertion method leads to agreement with the experimentally determined value. Further studies of systematic uncertainties reveal only small discretisation effects. For the electromagnetic form factors of the nucleon, we see a similar contamination from excited states as for the axial charge. The electromagnetic radii, extracted from a dipole fit to the momentum dependence of the form factors, show no indication of finite-size or cutoff effects. If we include excited states using the summed operator insertion method, we achieve better agreement with the radii from phenomenology. The anomalous magnetic moment of the muon can be measured and predicted to very high precision. The theoretical prediction of the anomalous magnetic moment receives contributions from the strong, weak, and electromagnetic interactions, where the hadronic contributions dominate the uncertainties.
A persistent 3σ tension between the experimental determination and the theoretical calculation is found, which is considered an indication of physics beyond the Standard Model. We present a calculation of the connected part of the hadronic vacuum polarisation using lattice QCD. Partially twisted boundary conditions lead to a significant improvement of the vacuum polarisation in the region of small momentum transfer, which is crucial for the extraction of the hadronic vacuum polarisation.
Abstract:
Large-scale structures can be considered an interesting and useful "laboratory" for investigating the Universe; in particular, the filaments connecting clusters and superclusters of galaxies can be a powerful tool for this purpose, since they are not yet virialised systems. The large structures of the Universe have been studied in different bands; the present work considers emission in the radio band. In recent years both compact and diffuse radio emission have been detected, found to be associated with single objects and with clusters of galaxies, respectively. The detection of these sources is important because the radiation process is synchrotron emission, which in turn is linked to the presence of a magnetic field: studying these radio sources can therefore help in investigating the magnetic field that permeates different portions of space. Furthermore, radio emission in optical filaments has been detected recently, opening new opportunities to further improve the understanding of structure formation. Filaments can be seen as the net which links clusters and superclusters. This work was carried out with the aim of investigating non-thermal properties in low-density regions, looking for possible filaments associated with the diffuse emission. The analysed sources are 0917+75, located at a redshift z = 0.125, and the double cluster system A399-A401, at z = 0.071806 and z = 0.073664 respectively. Data were taken from VLA/JVLA observations and were reduced and calibrated with the AIPS package, following the standard procedure. Isocontour and polarisation maps were produced, allowing the main physical properties to be derived. Unfortunately, because of the low quality of the data for A399-A401, it was not possible to see any radio halo or bridge.
Abstract:
We consider systems of finitely many particles, where the particles move independently of one another according to one-dimensional diffusions $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$. The particles die at position-dependent rates and leave behind a random number of offspring, which are distributed in space according to a transition kernel. In addition, new particles immigrate at a constant rate. A process with these properties is called a branching process with immigration. If we observe such a process at discrete time points, it is not immediately obvious which discretely observed points belong to which path. We therefore develop an algorithm to reconstruct the underlying paths. Using this algorithm, we construct a nonparametric estimator for the squared diffusion coefficient $\sigma^2(\cdot)$, where the construction essentially rests on filling in a classical regression scheme. We prove consistency and a central limit theorem.
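As a minimal illustration of the particle dynamics above, a single path of the diffusion dX_t = b(X_t) dt + σ(X_t) dW_t can be simulated with an Euler-Maruyama scheme. The drift and diffusion coefficient below (a mean-reverting drift with constant noise) are placeholder choices for the sketch, not those studied in the thesis.

```python
import random
from math import sqrt

def euler_maruyama(b, sigma, x0, dt, n_steps, seed=0):
    """Simulate one path of dX_t = b(X_t) dt + sigma(X_t) dW_t."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, sqrt(dt))  # Brownian increment over one step
        x = x + b(x) * dt + sigma(x) * dW
        path.append(x)
    return path

# Placeholder coefficients, chosen only for illustration:
path = euler_maruyama(b=lambda x: -x, sigma=lambda x: 0.5,
                      x0=1.0, dt=0.01, n_steps=1000)
```

In a branching setting, each particle would carry its own such path until its death time, with offspring paths started at the positions drawn from the transition kernel.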
Abstract:
The first chapter of this work aims to provide a brief overview of the history of our Universe, in the context of string theory and considering inflation as its possible application to cosmological problems. We then discuss type IIB string compactifications, introducing the study of the inflaton, a scalar field that is a candidate for describing inflation. The Large Volume Scenario (LVS) is studied in the second chapter, paying particular attention to the stabilisation of the Kähler moduli, which are four-dimensional gravitationally coupled scalar fields that parameterise the size of the extra dimensions. Moduli stabilisation is the process through which these particles acquire a mass and can become promising inflaton candidates. The third chapter is devoted to the study of Fibre Inflation, an interesting inflationary model derived within the context of LVS compactifications. The fourth chapter tries to extend the slow-roll region of the scalar potential by taking larger values of the field φ. This is done with the purpose of studying in detail deviations of the cosmological observables, which can better reproduce current experimental data. Finally, we present a slight modification of Fibre Inflation based on a different compactification manifold. This new model produces larger tensor modes with a spectral index in good agreement with the data released in February 2015 by the Planck satellite.
Abstract:
The collapse of several columns exhibiting similar damage patterns, namely wide, strongly inclined cracks at both ends of the member, crushing of the concrete, and buckling of the longitudinal bars, prompted an investigation of the effects of the interaction between axial force, shear, and bending moment. The study began with a literature review, which revealed a substantial gap in the treatment of the subject. The problem was approached through classical formulas of the theory of structures, with the aim of relating axial force, shear, and bending moment; the work focused mainly on Mohr's theory. At first, the interaction between only two stress resultants was considered: axial force and shear. The analysis led to the construction of an elastic shear-axial force domain which, compared with the domain of the Modified Compression Field Theory found in the literature, showed that the results are entirely comparable. The analysis then turned to the interaction between axial force, shear, and bending moment. By imposing two failure criteria, attainment of the tensile and of the compressive strength of the concrete, and introducing the stress components through the Navier and Jourawsky formulas, two expressions relating the three actions were derived; implemented in Matlab, they allowed the construction of a three-dimensional interaction domain. In this case it was not possible to compare the results, as the literature review had revealed nothing comparable.
The study then focused on developing a procedure to analyse the behaviour of a section subjected to axial force, shear, and bending moment: a fibre model of the section was developed in an attempt to carry out a nonlinear calculation, corresponding to a sequence of linear analyses. The procedure was applied to real collapse cases, confirming the occurrence of the collapses.
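The fibre-section idea mentioned above can be sketched as follows: discretise a rectangular concrete section into horizontal fibres and integrate the fibre stresses from a linear strain profile to recover the axial force N and bending moment M. The geometry, the capped linear-elastic concrete law, and the strain values below are illustrative assumptions, not the model actually used in the thesis.

```python
def section_forces(eps_top, eps_bot, b=300.0, h=500.0, n_fib=100,
                   E=30000.0, f_c=-30.0):
    """Return (N, M) in N and N·mm for a linear strain profile over the depth.

    Linear-elastic concrete capped at the compressive strength f_c;
    tension is carried elastically here, purely for illustration."""
    dz = h / n_fib
    N, M = 0.0, 0.0
    for i in range(n_fib):
        z = -h / 2 + (i + 0.5) * dz                  # fibre centroid from mid-height
        eps = eps_bot + (eps_top - eps_bot) * (z / h + 0.5)
        stress = max(E * eps, f_c)                   # cap the compressive stress
        A = b * dz                                   # fibre area
        N += stress * A
        M += stress * A * z
    return N, M

# Illustrative strain profile: compression at the top, mild tension at the bottom.
N, M = section_forces(eps_top=-0.001, eps_bot=0.0005)
```

In the actual procedure, such a section computation would be repeated within a sequence of linear analyses, updating the fibre states at each step to approximate the nonlinear response.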
Abstract:
Seismic assessment and seismic strengthening are key issues that need to be addressed in the protection and reuse of historical buildings. In this thesis the seismic behaviour of a typical structure of historical buildings, i.e. the hinged steel frames of Shanghai, China, was studied on the basis of experimental investigations and theoretical analysis. How the non-structural members work together with the steel frames was analysed thoroughly. Firstly, two 1/4-scale hinged steel frames were constructed based on the structural system of Bund 18, a historical building in Shanghai: model M1 without infill walls and model M2 with infill walls. They were tested under horizontal cyclic loads to investigate their seismic behaviour. The shaking table test and its results indicated that the seismic behaviour of the hinged steel frames could be improved significantly by the non-structural members, i.e. the elements surrounding the hinged steel frames and the infill walls. Specifically, the columns are covered with bricks and consist of I-shaped formed steel sections and steel plates clenched together. The steel beams are connected to the steel columns by steel angles, so the structure should be considered a hinged frame. The infill wall acted as a compression diagonal strut withstanding the horizontal load; therefore, the seismic capacity and stiffness of the hinged steel frames with infill walls could be estimated using the equivalent compression diagonal strut model. A SAP model was constructed with the objective of performing a dynamic nonlinear analysis, and the obtained results were compared with those of the shaking table test. The test results validated that the influence of infill walls on seismic behaviour can be estimated using the equivalent diagonal strut model.
Abstract:
A comparison between the main design methods for unpaved roads is presented in this paper. An unpaved road is made up of an unbound aggregate base course lying on a usually weak subgrade. A geosynthetic may be placed between the two to act as reinforcement and separation. The goal of a design method is to find the appropriate thickness of the base course, knowing at least the traffic volume, wheel load, tire pressure, undrained cohesion of the subgrade, allowable rut depth, and the influence of the reinforcement. Geosynthetics can reduce the thickness or the quality of the aggregate required and improve the durability of an unpaved road. Geotextiles help save aggregate through interface friction and separation, while geogrids do so through interlocking between their apertures and the lithic base elements. In the last chapter a case study is discussed, and design thicknesses are calculated with two design methods for the three possible cases (i.e. unreinforced, geotextile-reinforced, geogrid-reinforced).
Abstract:
Background Patients often establish initial contact with healthcare institutions by telephone. During this process they are frequently medically triaged. Purpose To investigate the safety of computer-assisted telephone triage for walk-in patients with non-life-threatening medical conditions at an emergency unit of a Swiss university hospital. Methods This prospective surveillance study compared the urgency assessments of three different types of personnel (call centre nurses, hospital physicians, primary care physicians) who were involved in the patients' care process. Based on the urgency recommendations of the hospital and primary care physicians, cases which could potentially have resulted in an avoidable hazardous situation (AHS) were identified. Subsequently, the records of patients with a potential AHS were assessed for risk to health or life by an expert panel. Results 208 patients were enrolled in the study, of whom 153 were assessed by all three types of personnel. Congruence between the three assessments was low. The weighted κ values were 0.115 (95% CI 0.038 to 0.192) for hospital physicians vs the call centre, 0.159 (95% CI 0.073 to 0.242) for primary care physicians vs the call centre, and 0.377 (95% CI 0.279 to 0.480) for hospital vs primary care physicians. Seven of 153 cases (4.57%; 95% CI 1.85% to 9.20%) were classified as a potential AHS. A risk to health or life was adjudged in one case (0.65%; 95% CI 0.02% to 3.58%). Conclusion Medical telephone counselling is a demanding task requiring competent specialists with dedicated training in communication, supported by suitable computer technology. Provided these conditions are in place, computer-assisted telephone triage can be considered a safe method of assessing the potential clinical risks of patients' medical conditions.
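Agreement statistics like the weighted κ values reported above can be computed directly from a cross-tabulation of two raters' ordinal ratings. The sketch below implements a generic linearly weighted Cohen's κ; the 3-level urgency confusion matrix is invented for illustration and does not reproduce the study's data.

```python
def weighted_kappa(conf, weights="linear"):
    """Weighted Cohen's kappa from a square confusion matrix conf[i][j]
    (rater A assigns category i, rater B assigns category j)."""
    k = len(conf)
    total = sum(sum(row) for row in conf)
    # Disagreement weights: 0 on the diagonal, growing with category distance.
    if weights == "linear":
        w = [[abs(i - j) for j in range(k)] for i in range(k)]
    else:  # quadratic
        w = [[(i - j) ** 2 for j in range(k)] for i in range(k)]
    row_marg = [sum(conf[i]) for i in range(k)]
    col_marg = [sum(conf[i][j] for i in range(k)) for j in range(k)]
    observed = sum(w[i][j] * conf[i][j]
                   for i in range(k) for j in range(k)) / total
    expected = sum(w[i][j] * row_marg[i] * col_marg[j]
                   for i in range(k) for j in range(k)) / total ** 2
    return 1 - observed / expected

# Invented 3-level urgency cross-tabulation of two raters (not study data):
conf = [[30, 10, 2],
        [8, 40, 12],
        [1, 9, 41]]
kappa = weighted_kappa(conf)
```

Values near 0 (as in the study's call-centre comparisons) indicate agreement barely better than chance, while values toward 1 indicate strong agreement.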
Abstract:
Introduction Acute hemodynamic instability increases morbidity and mortality. We investigated whether early non-invasive cardiac output monitoring enhances hemodynamic stabilization and improves outcome. Methods A multicenter, randomized controlled trial was conducted in three European university hospital intensive care units in 2006 and 2007. A total of 388 hemodynamically unstable patients identified during their first six hours in the intensive care unit (ICU) were randomized to receive either non-invasive cardiac output monitoring for 24 hrs (minimally invasive cardiac output/MICO group; n = 201) or usual care (control group; n = 187). The main outcome measure was the proportion of patients achieving hemodynamic stability within six hours of starting the study. Results The number of hemodynamic instability criteria at baseline (MICO group mean 2.0 (SD 1.0), control group 1.8 (1.0); P = .06) and severity of illness (SAPS II score; MICO group 48 (18), control group 48 (15); P = .86)) were similar. At 6 hrs, 45 patients (22%) in the MICO group and 52 patients (28%) in the control group were hemodynamically stable (mean difference 5%; 95% confidence interval of the difference -3 to 14%; P = .24). Hemodynamic support with fluids and vasoactive drugs, and pulmonary artery catheter use (MICO group: 19%, control group: 26%; P = .11) were similar in the two groups. The median length of ICU stay was 2.0 (interquartile range 1.2 to 4.6) days in the MICO group and 2.5 (1.1 to 5.0) days in the control group (P = .38). The hospital mortality was 26% in the MICO group and 21% in the control group (P = .34). Conclusions Minimally-invasive cardiac output monitoring added to usual care does not facilitate early hemodynamic stabilization in the ICU, nor does it alter the hemodynamic support or outcome. 
Our results emphasize the need to evaluate technologies used to measure stroke volume and cardiac output, especially their impact on the process of care, before any large-scale outcome studies are attempted.
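As a quick arithmetic check of the primary endpoint above, the ~5% difference in stabilization rates and its 95% CI can be reproduced from the reported counts. A standard Wald interval for a difference of proportions is assumed here; the abstract does not state the exact method used.

```python
from math import sqrt

# Counts from the abstract: 45/201 stable in the MICO group,
# 52/187 stable in the control group at 6 hours.
n_mico, x_mico = 201, 45
n_ctrl, x_ctrl = 187, 52
p_mico, p_ctrl = x_mico / n_mico, x_ctrl / n_ctrl

diff = p_ctrl - p_mico  # ~5 percentage points in favour of control
se = sqrt(p_mico * (1 - p_mico) / n_mico + p_ctrl * (1 - p_ctrl) / n_ctrl)
ci = (diff - 1.96 * se, diff + 1.96 * se)  # reported as -3% to 14%
```

Since the interval spans zero, the difference is not statistically significant, consistent with the P = .24 reported in the abstract.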
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while an uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
Abstract:
Many metabolites in the proton magnetic resonance spectrum undergo magnetization exchange with water, such as those in the downfield region (6.0-8.5 ppm) and the upfield peaks of creatine, which can be measured to reveal additional information about the molecular environment. In addition, these resonances are attenuated by conventional water suppression techniques, complicating detection and quantification. To characterize these metabolites in human skeletal muscle in vivo at 3 T, metabolite-cycled non-water-suppressed spectroscopy was used to conduct a water inversion transfer experiment in both the soleus and tibialis anterior muscles. The resulting median exchange-independent T1 times for the creatine methylene resonance were 1.26 and 1.15 s, and for the methyl resonance 1.57 and 1.74 s, for the soleus and tibialis anterior muscles, respectively. Magnetization transfer rates from water to the creatine methylene resonance were 0.56 and 0.28 s^-1, and for the methyl resonance 0.39 and 0.30 s^-1, with the soleus exhibiting faster transfer rates for both resonances, allowing speculation about possible influences of muscle fibre orientation or muscle composition on the magnetization transfer process. These water magnetization transfer rates, observed without water suppression, are in good agreement with earlier reports that used either post-excitation water suppression in rats or short CHESS sequences in human brain and skeletal muscle.
Abstract:
Delayed fracture healing and non-unions represent rare but severe complications in orthopedic surgery. Further knowledge of the mechanisms of the bone repair process and of the development of a pseudoarthrosis is essential to predict and prevent impaired healing of fractures. The present study aimed at elucidating differences in gene expression during the repair of rigidly and non-rigidly fixed osteotomies. For this purpose, the MouseFix™ and FlexiPlate™ systems (AO Development Institute, Davos, CH), allowing the creation of well-defined osteotomies in mouse femora, were employed. A time course following the healing of the osteotomy was performed, and bones and peri-implant tissues were analyzed by high-resolution X-ray, MicroCT, and histology. For the assessment of gene expression, Low Density Arrays (LDA) were used. In animals with rigid fixation, X-ray and MicroCT revealed healing of the osteotomy within 3 weeks. With the FlexiPlate™ system, the osteotomy was still visible by X-ray after 3 weeks and a stabilizing cartilaginous callus had formed. After 4.5 weeks, the callus was remodeled and the osteotomy was, at the histological level, healed. Gene expression studies revealed that levels of transcripts encoding proteins associated with inflammatory processes were not altered in tissues from bones with either rigid or non-rigid fixation. Levels of transcripts encoding proteins of the extracellular matrix and essential for bone cell functions were not increased in the rigidly fixed group when compared to controls without osteotomy. In the FlexiPlate™ group, levels of transcripts encoding the same set of genes were significantly increased 3 weeks after surgery. Expression of transcripts encoding BMPs and BMP antagonists was increased after 3 weeks in repair tissues from bones fixed with FlexiPlate™, as were inhibitors of the WNT signaling pathways. Only small changes were detected in transcript levels of tissues from rigidly fixed bones.
The data of the present study suggest that rigid fixation enables accelerated healing of an experimental osteotomy as compared to non-rigid fixation. The changes in the healing process after non-rigid fixation are accompanied by an increase in the levels of transcripts encoding inhibitors of osteogenic pathways and, probably as a consequence, by temporal changes in bone matrix synthesis.