912 resultados para non-process elements


Relevância:

30.00%

Publicador:

Resumo:

Human reasoning is a fascinating and complex cognitive process that is studied in fields as diverse as philosophy, psychology, law and finance. Unfortunately, developing software able to support such complex reasoning in these areas is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program that evaluates a theory (a set of rules) with respect to a goal and returns results such as "the goal is derivable from the knowledge base of the theory". To achieve this we need to analyse different logics and choose the one that best meets our needs. In logic we usually try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory); in logic programming, however, we also need an efficient algorithm for finding such implications. In this work we use a logic rather close to human reasoning. Indeed, human reasoning requires an extension of first-order logic able to reach conclusions from premises that are not definitely true and that belong to an incomplete body of knowledge. We therefore implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). Applications of this kind are useful in the legal domain, especially when they implement an argumentation framework that provides a formal model of a game. Roughly speaking, let the theory be a set of laws, let a key claim be the conclusion that one party wants to prove (and the other wants to defeat), and allow rules, namely facts put forward by the parties, to be asserted dynamically; we can then play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the strategies the players adopt.
Implementing the game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, following Gödel's theorem (see page 127), we cannot evaluate the meaning of a language with the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a single meta-interpreter, we propose a meta-level containing several meta-evaluators: the first has been described above, the second is needed to run the game model, and the last is used to change the game-execution and tree-derivation strategies.
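The core task described above — deciding whether a goal is defeasibly derivable from a theory of rules with a superiority relation — can be sketched in a few lines. This is a deliberately simplified, propositional variant (real defeasible logic also has strict rules and defeaters), offered only as an illustration and not as the meta-interpreter developed in the thesis:

```python
# Minimal propositional defeasible-reasoning sketch (illustrative only;
# the rule encoding below is an assumption, not the thesis implementation).

def derivable(goal, facts, rules, superior):
    """Return True if `goal` is defeasibly derivable.

    facts:    set of literals known to hold, e.g. {"bird"}
    rules:    list of (name, antecedents, consequent) defeasible rules
    superior: set of (winner, loser) rule-name pairs
    """
    def neg(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def provable(lit, seen):
        if lit in facts:
            return True
        if lit in seen:          # guard against cyclic derivations
            return False
        pro = [r for r in rules if r[2] == lit
               and all(provable(a, seen | {lit}) for a in r[1])]
        if not pro:
            return False
        con = [r for r in rules if r[2] == neg(lit)
               and all(provable(a, seen | {lit}) for a in r[1])]
        # every applicable attacking rule must be beaten by a pro rule
        return all(any((p[0], c[0]) in superior for p in pro) for c in con)

    return provable(goal, set())

# Classic example: birds usually fly, but penguins usually do not,
# and the penguin rule is superior.
facts = {"bird", "penguin"}
rules = [("r1", ["bird"], "flies"),
         ("r2", ["penguin"], "~flies")]
superior = {("r2", "r1")}
```

With this theory, `derivable("~flies", facts, rules, superior)` holds while `derivable("flies", ...)` does not, mirroring how a superiority relation resolves a conflict between two applicable rules.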

Relevância:

30.00%

Publicador:

Resumo:

The purpose of this research is to deepen the study of the section in architecture. The survey focuses on the section as a key element in the project Teatro Domestico by Aldo Rossi, built for the XVII Triennale di Milano in 1986, and, by applying it to several architectural themes, verifies its timeliness and fertility for new compositional exercises. Through the study of certain areas of Rossi's theory we sought a common thread for reading the theatre project. The theatre is the place of the ephemeral and the artificial, which is why its destiny is the end and fatal loss. The design and construction of theatre settings has always had a double meaning, between the value of civil architecture and the testing of newly available technologies. Rossi's experiences in this area are clear examples of the inseparable relationship between the representation of architecture as art and the design of architecture as a model of reality. In the Teatro Domestico, the distinction between representation and the real world is constantly cancelled and restored through the reversal of meaning and through jumps of scale. Current studies of Rossi's work address the relationship between architectural composition and the theory of form, focusing on the development of a compositional process between typological analysis and formal invention. Through the analysis of a few projects and drawings, this research tries to examine that issue through the rules of composition, both graphic and constructive, hoping to decipher the mechanism underlying the invention. The almost total lack of published material on the Teatro Domestico project and the opportunity to visit the archives that preserve the drawings allowed the author to investigate the internal issues of the project, making this research a first step towards further analysis of Rossi's works linked to the world of performance. 
The final aim is therefore to produce material that can best describe Rossi's work. By reading the material published by the author himself and viewing the unpublished material preserved in the archives, it was possible to develop new material and increase knowledge of a work that would otherwise be difficult to analyse. The research is divided into two parts. The first, taking into account the close relationship, frequently mentioned by Rossi himself, between archaeology and architectural composition, stresses the importance of the tipo (type) both as a system for reading urban composition and as an open tool of invention. Returning to Ezio Bonfanti's essay on the architect's work, we investigate how the paratactic method is applied in the early works and how, later, the process reaches an accentuated complexity while keeping its basic terms stable. After a brief introduction to the concept of the section and the different interpretations the term has had over time, we try to use it as a methodology for reading Rossi's projects. The result is a consistently typological interpretation of the term, related not only to composition in plan but also to the elevations. The section is thus understood as the overturning of the elevation onto the same plane: the terms used reveal not a different approach but a similarity of characters. The identification of architectural phonemes allows comparison with the other arts. The research moves in the direction of language, trying to identify the relationship between representation and construction, between the ephemeral and the real world. In this sense it highlights the similarities between the graphic material produced by Rossi and some important examples by contemporary authors. 
The comparison of this compositional system with the surrealist world of painting and literature helps in understanding and identifying the rules Rossi may have applied. The second part of the research focuses on the intentions of the chosen project. Teatro Domestico embodies a number of elements that seem to conclude (as an end point, but also a new beginning) the author's path: with it, the experiments on the theatre begun with the Teatrino Scientifico (1978) and continued with the Teatro del Mondo (1979) flow into a lay tabernacle representing the collective and private memory of the city. Starting from a reading of the project through the collected published material, we analysed the explicit themes of the work and traced its conceptual references. Then, drawing on the original unpublished material kept in the Aldo Rossi Collection of the Canadian Centre for Architecture in Montréal, a virtual reconstruction of the project was carried out using current digital-representation techniques, adding to the existing material a new element for future studies. The reconstruction is part of a larger body of research in which current technologies of composition and representation in architecture stand side by side with research on this architect's compositional method. The results add to past experiences in reconstructing some of Aldo Rossi's lost works. A partial objective is to reopen a discourse around a work considered minor among those born of his prolific activity, and to re-evaluate projects of an ephemeral character by giving them the value they have earned. 
In conclusion, the research aims to open a new field of interest in the section, not only as a technical instrument for representing an idea but as an actual mechanism through which the composition takes shape and the idea is developed.

Relevância:

30.00%

Publicador:

Resumo:

Information technologies and communication methods have changed the way documents intended to convey knowledge are drafted, in a process that is still evolving today. Design activity in engineering and architecture has also undergone profound transformations driven by new technologies, even in a sector such as the building industry, which is characterized by considerable methodological inertia and resistance to innovation. For some time the information needed to construct a building, from the drawings that represent it to the documents specifying how it is to be built, has been manageable in a centralized way through a single project archive called an IPDB (Integrated Project Database), of which a more operational variant, BIM (Building Information Modelling), has recently been introduced on the market. However, the industrialization of design that these tools bring about does not fully account for all the aspects that make an architectural work a collector of knowledge belonging to a design culture that, particularly in Italy, is deeply rooted in time. The semantics of digital representation tends to level the constituent elements of the design, cataloguing only their fabrication characteristics. A review of the relevant scientific literature shows that the methods and software on the market cannot be regarded as all-encompassing containers of information: that holistic approach is instead the foundation of integrated modelling, understood as an original process of knowledge representation ordered according to the "Chinese boxes" paradigm, an evolving model that unifies the languages of the different actors involved in plant engineering, structural design, and advanced visualization. 
By critically highlighting the merits and operational limits of integrated modelling, the experimental component of the research was developed through in-depth experiences conducted in academic and professional contexts. The result combined surveying techniques with the potential of "intelligent three-dimensional models", i.e. models equipped with discriminating criteria for evaluating the topological relationships of the components with the whole.
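The "Chinese boxes" paradigm described above — an evolving model that nests the contributions of the plant-engineering, structural, and visualization actors — can be sketched as a recursive component tree. Class and attribute names here are purely illustrative assumptions, not an actual IPDB/BIM schema:

```python
# Illustrative sketch of a nested ("Chinese boxes") building model.
# Names and fields are assumptions for illustration only.

class Component:
    def __init__(self, name, discipline, **attributes):
        self.name = name
        self.discipline = discipline      # e.g. "structural", "MEP"
        self.attributes = attributes      # fabrication data
        self.children = []                # nested sub-components

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, discipline):
        """Collect all nested components belonging to one discipline."""
        hits = [self] if self.discipline == discipline else []
        for child in self.children:
            hits.extend(child.find(discipline))
        return hits

# A building contains a floor, which contains elements from
# different disciplines, each carrying its own fabrication data.
building = Component("building", "architecture")
floor = building.add(Component("floor-1", "architecture"))
floor.add(Component("beam-A", "structural", material="steel"))
floor.add(Component("duct-3", "MEP"))
```

Querying `building.find("structural")` walks the nesting and returns the structural elements wherever they sit in the hierarchy, which is the kind of cross-discipline, topology-aware access the integrated model is meant to provide.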

Relevância:

30.00%

Publicador:

Resumo:

In this work a new method for sensitive and isotope-selective element detection was developed. Using laser ablation, the sample is dissociated directly and with a spatial resolution below 30 µm. For this purpose a high-resolution MALDI-TOF mass spectrometer, normally used for biochemical applications, was modified with a spectroscopic setup for the resonant ionization of trace elements. The method is thus designed in particular for the investigation of trace elements in solid samples with microscopic structure. The methodology was developed using the element gadolinium. Pulsed dye lasers provide laser fields high enough to saturate transitions of all isotopes in the resonance-ionization scheme, independently of hyperfine structure and isotope shift. On this basis, isotope-ratio analysis with an accuracy at the percent level was achieved. Different excitation ladders were investigated, and with element-specific resonance enhancements of up to two orders of magnitude above the non-resonantly produced background, a detection efficiency above 10^-4 (corresponding to the sub-fg/g level) was obtained. In addition, simulations of the atomic saturation behaviour in strong resonant laser fields were carried out. The first applications of the laser-ablation method were samples of cosmological origin. The physical process of laser ablation of metals was investigated systematically under high-vacuum conditions as a function of laser fluence. In the ablated plasma phase, the neutral fraction proved particularly suitable for velocity-selective laser-ionization measurements. A bimodal structure was observed, consisting of a thermal and a shock-wave-induced component. 
The ionic fraction of the ablated vapour phase was investigated using variable electric field pulses. Laser ablation under atmospheric conditions was studied on a coated brass target. The formation of permanent surface structures was observed, which can be explained by non-equilibrium processes in the vapour phase.
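The saturation behaviour mentioned above (laser fields strong enough to saturate the transitions of all isotopes) can be illustrated with the steady-state two-level result; this is a generic textbook formula, not the thesis' own simulation:

```python
def excited_fraction(saturation_param):
    """Steady-state excited-state population of a two-level atom
    driven on resonance: rho_ee = (s/2) / (1 + s), where
    s = I / I_sat is the saturation parameter. The population
    saturates at 1/2 for s >> 1, which is why a strong enough
    field ionizes all isotopes with comparable efficiency.
    (Textbook two-level model, used only for illustration.)"""
    return 0.5 * saturation_param / (1.0 + saturation_param)
```

At s = 1 a quarter of the atoms are excited; for s well above 1 the excited fraction approaches its ceiling of 1/2, so hyperfine structure and isotope shift no longer affect the ionization probability, as the abstract requires for accurate isotope-ratio analysis.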

Relevância:

30.00%

Publicador:

Resumo:

A Micro-Opto-Mechanical Systems (MOMS) based technology for the fabrication of ultrasonic probes on optical fiber is presented. Thanks to the high level of miniaturization reached, an ultrasonic system made of ultrasound-generating and -detecting elements, suitable for minimally invasive applications or high-resolution Non-Destructive Evaluation (NDE) of materials, is demonstrated. Ultrasound is generated by irradiating a highly absorbing carbon film, patterned on silicon micromachined structures, with a nanosecond pulsed laser source: the conversion of optical energy into heat induces a thermal expansion of the film that launches a mechanical shock wave. The short duration of the laser pulse, together with an appropriate emitter design, ensures high-frequency, wide-band ultrasonic generation. Acoustic detection is also realized on a MOMS device using an interferometric receiver, fabricated as a Fabry-Perot optical cavity made of patterned SU-8 between two Al metallization levels. To detect the ultrasonic waves, the cavity is interrogated by a laser beam and the reflected power is measured with a photodiode. Various issues related to the design and fabrication of these acoustic probes are investigated in this thesis. First, theoretical models are developed to characterize the opto-acoustic behavior of the devices and estimate their expected acoustic performance. Test structures are fabricated to derive the relevant physical parameters of the materials constituting the MOMS devices and to determine the conditions that theoretically assure the best acoustic emission and detection performance. Finally, exploiting the models and the theoretical results, prototypes of the acoustic probes are designed and their fabrication process is developed through an extended experimental activity.
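The interferometric read-out described above — a laser interrogates the Fabry-Perot cavity and a photodiode measures the reflected power — can be illustrated with the textbook Airy formula for an ideal lossless two-mirror cavity. All numerical values (wavelength, mirror reflectivity) are assumptions for illustration, not the actual device parameters:

```python
import math

def fp_reflected_power(cavity_len_nm, wavelength_nm=1550.0,
                       mirror_refl=0.9, n_index=1.0):
    """Fraction of optical power reflected by an ideal lossless
    Fabry-Perot cavity (Airy formula). Default values are
    illustrative assumptions, not the thesis device's specs."""
    phase = 4.0 * math.pi * n_index * cavity_len_nm / wavelength_nm
    coeff_f = 4.0 * mirror_refl / (1.0 - mirror_refl) ** 2
    s = math.sin(phase / 2.0) ** 2
    return coeff_f * s / (1.0 + coeff_f * s)

# An incoming ultrasonic wave modulates the cavity length; the slope
# of this reflectance curve at the chosen bias point sets the
# sensitivity of the acoustic detection.
```

At resonance (cavity length a multiple of half the wavelength) the lossless cavity reflects nothing, and between resonances almost everything; biasing the cavity on the steep flank converts nanometre-scale acoustic displacements into large changes of reflected power.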

Relevância:

30.00%

Publicador:

Resumo:

Stars with an initial mass between roughly 8 and 25 solar masses end their existence in a violent explosion, a Type II supernova. The high-entropy bubble formed in this event is a region at the edge of the nascent neutron star and is considered a possible site of the r-process. Because of the high temperature T inside the bubble, the matter there is completely photodisintegrated. The ratio of neutrons to protons is described by the electron fraction Ye, and the thermodynamic evolution of the system is given by the entropy S. Since the bubble expands rapidly, the expansion can be treated as adiabatic; the entropy S is then proportional to T^3/rho, where rho is the density. The explicit time evolution of T and rho, as well as the process duration, depend on Vexp, the expansion velocity of the bubble. The first part of this dissertation deals with the charged-particle reaction phase, the alpha-process. This process ends at temperatures of about 3 x 10^9 K in the so-called "alpha-rich" freeze-out, which produces mainly alpha particles, free neutrons, and a small fraction of intermediate-mass "seed" nuclei in the mass region around A=100. The ratio of free neutrons to seed nuclei, Yn/Yseed, is decisive for whether an r-process can follow. The second part of this work deals with the r-process itself, which takes place at neutron number densities of up to 10^27 neutrons per cm^3 and, within at most 400 ms, builds very neutron-rich "progenitor" isotopes of elements up to thorium and uranium. During the subsequent freeze-out of the neutron-capture reactions, at 10^9 K and 10^20 neutrons per cm^3, the original r-process nuclei beta-decay back to the valley of stability. This non-equilibrium phase is examined in detail in the present work in a parameter study. 
Finally, astrophysical conditions are defined under which the entire distribution of solar r-process isotopic abundances can be reproduced.
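The adiabatic relation S ∝ T^3/rho quoted above can be turned into a one-line density estimate. As a hedged sketch: for a radiation-dominated outflow one often writes S ≈ 3.3 T9^3/rho5 (S in k_B per baryon, T9 = T/10^9 K, rho5 = rho/10^5 g cm^-3); the numerical prefactor is an approximation and should be treated as an assumption here:

```python
def adiabatic_density(temp9, entropy):
    """Density (g/cm^3) at temperature T9 = T / 10^9 K for an
    adiabatic, radiation-dominated outflow of entropy `entropy`
    (in k_B per baryon), using rho ~ 3.3e5 * T9**3 / S.
    The prefactor 3.3e5 is an approximate radiation-entropy
    estimate, assumed here for illustration only."""
    return 3.3e5 * temp9 ** 3 / entropy

# Example: conditions near the alpha-rich freeze-out at T9 = 3,
# for an assumed entropy of 200 k_B per baryon.
rho_freeze = adiabatic_density(3.0, 200.0)
```

Because the expansion is adiabatic, doubling T raises rho eightfold at fixed S; this T^3/rho scaling is exactly the relation the abstract uses to tie the bubble's thermodynamic trajectory to the expansion velocity Vexp.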

Relevância:

30.00%

Publicador:

Resumo:

The aim of this work is to explain the paradigm of American foreign policy during the Johnson administration, especially toward Europe within the NATO framework and toward the USSR in the context of the détente that emerged during the 1960s. After the death of J. F. Kennedy, President L. B. Johnson inherited a complex and ambitious world policy, which sought to open a new phase in transatlantic relations and to share the burden of the Cold War with a refractory Europe. Known as the grand design, this policy needed the support of the allies and a clear purpose that would appeal to the Europeans. At first President Johnson saw in the problem of nuclear sharing the bargain to strike with the NATO allies. At the same time, he understood that the United States needed to reassert its leadership in the new stage of relations with the Soviet Union. Soon the "transatlantic bargain" proved not so easy to handle. Federal Germany wanted a say in nuclear affairs and, why not, a finger on the trigger of the Atlantic nuclear weapons. The USSR, on the other hand, wanted to keep Germany down. The other allies did not want to share the burden of the defense of Europe, wanting at most the responsibility for the use of the weapons and at least participation in the decision-making process. France, which wanted to detach herself from the policy of the United States and regain a world role, added difficulties to the management of this course of action. Through the years of Johnson's term, the divergences among the policies proposed by his advisers put American foreign policy in deep water. The withdrawal of France from the organization, but not from the Alliance, gave Washington a chance to carry out its goal. The development of a clear-cut disarmament policy led the Johnson administration to the core of the matter. 
The Non-Proliferation Treaty, signed in 1968, solved the problem with the allies in a business-like fashion. The question of nuclear sharing faded away with the allies' acceptance of a deeper consultative role in nuclear affairs, the burden of the defense of Europe became more bearable through the offset agreements with the FRG, and a new doctrine, flexible response, put an end, at least formally, to the taboo of the nuclear age. Johnson's grand design proved different from Kennedy's but, all things considered, more workable. The unpredicted result was a real détente with the Soviet Union, which can be counted as a merit of President Johnson.

Relevância:

30.00%

Publicador:

Resumo:

Synthetic biology has recently seen great development: many papers have been published and many applications presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, however, most applications are quite simple and do not fully exploit the potential of the discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important is the inability of a cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project, of which this work is part, is to address this problem by engineering a multicellular behaviour in prokaryotic cells. This system will introduce a cooperative behaviour that makes it possible to implement complex functionalities that cannot be obtained with a single cell. In particular the goal is to implement leader election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of a leader greatly simplifies the computation by providing centralized control. Furthermore, such a system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and experimental characterization of one component of the circuit that solves the leader-election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells upon receiving the signal that a leader is present in the colony. 
The most important element, in this case, is the hybrid promoter. It has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction into the cellular environment of particular molecules, the inducers, which can be considered the inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is one only in the presence of both inducers. The robustness and stability of this behaviour have been tested by changing the concentrations of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs show an AND-like behaviour over a wide range of inducer concentrations, even if many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation cannot capture the full complexity of the behaviour. The module of the circuit considered in this analysis plays a fundamental role in the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal was tested. The desired behaviour is still similar to a logic AND since, even in this case, the output signal is determined by the activity of the hybrid promoter. The experimental results demonstrated that the systems behave correctly, even if a substantial variability between them remains. 
The dose-response curves highlighted that stricter constraints on the inducer concentrations must be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter the DNA sequences of the hybrid promoters are analysed, trying to identify the regulatory elements that matter most for the determination of gene expression. Given the available data it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented; this section briefly recalls some of the problems outlined in the introduction and suggests a few possible solutions.
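The AND-like dose-response behaviour described above can be sketched as the product of two Hill activation terms; this is a generic phenomenological model, and all parameter values are illustrative assumptions rather than the fitted values of the characterized constructs:

```python
def and_gate_output(ind_a, ind_b, k_a=1.0, k_b=1.0, n_a=2.0, n_b=2.0):
    """Steady-state output (e.g. normalized fluorescence) of a
    hybrid-promoter AND gate, modelled as the product of two Hill
    activation terms, one per inducer. All parameter values
    (half-activation constants k, Hill coefficients n) are
    illustrative assumptions, not measured values."""
    hill_a = ind_a ** n_a / (k_a ** n_a + ind_a ** n_a)
    hill_b = ind_b ** n_b / (k_b ** n_b + ind_b ** n_b)
    return hill_a * hill_b

# Sweeping ind_a and ind_b over a grid of concentrations reproduces
# the shape of a dose-response surface: high output only when both
# inducers are well above their half-activation constants.
```

Because the output is the product of two continuous sigmoids, the gate is "AND-like" rather than a strict binary AND, which matches the observation in the abstract that a binary representation cannot capture the full behaviour and that the two expression levels separate cleanly only within a restricted concentration range.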

Relevância:

30.00%

Publicador:

Resumo:

Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is highly desirable. The correct treatment of unstable particles makes this task difficult, and no satisfactory general solution can yet be found in the literature. In the present work we study the renormalization of the fermion Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second applies it to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave-function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from those that should renormalize the outgoing fermion and the incoming antifermion, and are not related by hermiticity, as would be desirable. Instead of defining field renormalization constants identical to the wave-function renormalization ones, we let the two differ by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants that are related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections on external legs. In the case of Majorana fermions the constraints on the model are less restrictive. 
Here one may have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for a fermion mixing matrix cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counterterm. Therefore, we propose to determine the mixing-matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As examples, we calculate the decay rate of a top quark and of a heavy neutrino. In each of the chosen models we provide sample calculations that can easily be extended to other theories.
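The on-shell identification of the physical mass and decay width from the full propagator, as described in this abstract, can be written schematically as follows (a generic textbook form, whose notation need not match the thesis' own conventions):

```latex
% Full fermion propagator with self-energy \Sigma, and the complex
% on-shell (pole) condition from which mass and width are read off:
S(p) = \frac{i}{\slashed{p} - m_0 - \Sigma(\slashed{p})},
\qquad
\bigl[\slashed{p} - m_0 - \Sigma(\slashed{p})\bigr]\Big|_{\slashed{p}=\mathcal{M}} = 0,
\qquad
\mathcal{M} = m - \tfrac{i}{2}\,\Gamma .
% The real part of the pole \mathcal{M} gives the physical mass m,
% its imaginary part the decay width \Gamma; the absorptive parts of
% \Sigma are what break the hermiticity relation between the
% wave-function renormalization constants discussed above.
```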

Relevância:

30.00%

Publicador:

Resumo:

Cytochrome P450 1A1 (CYP1A1) monooxygenase plays an important role in the metabolism of environmental pollutants such as polycyclic aromatic hydrocarbons (PAHs) and halogenated polycyclic aromatic hydrocarbons (HAHs). Oxidation of these compounds converts them to metabolites that can subsequently be conjugated to hydrophilic endogenous entities, e.g. glutathione. Derivatives generated in this way are water-soluble and can be excreted in bile or urine, which constitutes a defense mechanism. Besides detoxification, metabolism by CYP1A1 may have deleterious effects, since highly reactive intermediate metabolites are able to react with DNA and thus cause mutagenic effects, as in the case of benzo[a]pyrene (B[a]P). CYP1A1 is normally not expressed, or expressed at a very low level, in cells, but it is inducible by many PAHs and HAHs, e.g. by B[a]P or 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Transcriptional activation of the CYP1A1 gene is mediated by the aryl hydrocarbon receptor (AHR), a basic-helix-loop-helix (bHLH) transcription factor. In the absence of a ligand, AHR stays predominantly in the cytoplasm. Ligand binding causes translocation of AHR to the nuclear compartment, its heterodimerization with another bHLH protein, the aryl hydrocarbon receptor nuclear translocator (ARNT), and binding of the AHR/ARNT heterodimer to a DNA motif designated the dioxin-responsive element (DRE). This process leads to the transcriptional activation of responsive genes containing DREs in their regulatory regions, e.g. the gene coding for CYP1A1. TCDD is the most potent known agonist of AHR. Since it is not metabolized by the activated enzymes, exposure to this compound leads to a persistent activation of AHR, resulting in diverse toxic effects in the organism. 
To shed light on the molecular mechanisms that mediate the toxicity of xenobiotics like TCDD and related compounds, the AHR-dependent regulation of the CYP1A1 gene was investigated in two cell lines: human cervix carcinoma (HeLa) and mouse hepatoma (Hepa). The study of AHR activation and its consequences for expression of the CYP1A1 enzyme confirmed the TCDD-dependent formation of the AHR/ARNT complex on the DRE, leading to an increase of CYP1A1 transcription in Hepa cells. In contrast, in HeLa cells the formation of the AHR/ARNT heterodimer and the binding of a protein complex containing AHR and ARNT to the DRE occurred naturally in the absence of TCDD. Moreover, treatment with TCDD affected neither the AHR/ARNT dimer formation nor the binding of these proteins to the DRE in these cells. Even though the constitutive complex on the DRE exists in HeLa cells, transcription of the CYP1A1 gene was not increased. Furthermore, the CYP1A1 level in HeLa cells remained unchanged in the presence of TCDD, suggesting a repressive mechanism of AHR-complex function which may override the TCDD-dependent mechanisms in these cells. Like the native gene, mouse CYP1A1-driven reporter constructs containing different regulatory elements were not inducible by TCDD in HeLa cells, which supports the presence of a cell-type-specific trans-acting factor in HeLa cells able to repress both the native CYP1A1 and CYP1A1-driven reporter genes, rather than species-specific differences between the CYP1A1 genes of human and rodent origin. The different regulation of AHR-mediated transcription of the CYP1A1 gene in Hepa and HeLa cells was further explored in order to elucidate two aspects of AHR function: (I) the mechanism involved in the activation of AHR in the absence of an exogenous ligand, and (II) the factor that represses the function of the exogenous-ligand-independent AHR/ARNT complex. 
Since preliminary studies had revealed that the activation of PKA causes an activation of AHR in Hepa cells in the absence of TCDD, the PKA-dependent signalling pathway was proposed as the endogenous mechanism leading to the TCDD-independent activation of AHR in HeLa cells. However, activation of PKA by forskolin or db-cAMP, as well as inhibition of the kinase by H89, in both HeLa and Hepa cells did not alter the AHR interaction with ARNT in the absence of TCDD and had no effect on the binding of these proteins to the DRE. Moreover, the modulators of PKA did not influence CYP1A1 activity in these cells, either in the presence or in the absence of TCDD. Thus, an involvement of PKA in the regulation of the CYP1A1 gene in HeLa cells could not be established in the course of this study. Repression of genes by transcription factors bound to their responsive elements in the absence of ligands has been described for nuclear receptors. These receptors interact with a protein complex containing histone deacetylase (HDAC), the enzyme responsible for the repressive effect. A participation of histone deacetylase in the transcriptional modulation of the CYP1A1 gene by the constitutively DNA-bound AHR/ARNT complex was therefore postulated. Inhibition of HDAC activity by trichostatin A (TSA) or sodium butyrate (NaBu) led to an increase of CYP1A1 transcription in the presence, but not in the absence, of TCDD in Hepa and HeLa cells. Since the amounts of the AHR and ARNT proteins remained unchanged upon treatment of the cells with TSA or NaBu, the transcriptional upregulation of the CYP1A1 gene was not due to an increased expression of the regulatory proteins. These findings strongly suggest an involvement of HDAC in the repression of the CYP1A1 gene. Like the native human CYP1A1, the mouse CYP1A1-driven reporter gene transfected into HeLa cells was also repressed by histone deacetylase, since the presence of TSA or NaBu led to an increase in reporter activity. 
Induction of the reporter gene did not require the presence of the promoter or the negative regulatory regions of the CYP1A1 gene. A promoter-distal fragment containing three DREs together with the surrounding sequences was sufficient to mediate the effects of the HDAC inhibitors, suggesting that binding of AHR/ARNT to its specific DNA recognition site may be important for CYP1A1 repression. Histone deacetylase is recruited to specific genes by corepressors, proteins that bind to transcription factors and interact with other members of the HDAC complex. Western blot analyses revealed the presence of HDAC1 and the corepressors mSin3A (mammalian homolog of yeast Sin3) and SMRT (silencing mediator for retinoid and thyroid hormone receptor) in both cell types, while the corepressor NCoR (nuclear receptor corepressor) was expressed exclusively in HeLa cells. The high inducibility of CYP1A1 in Hepa cells may therefore be due to the absence of NCoR, in contrast to the non-responsive HeLa cells, where the presence of NCoR would support repression of the gene by histone deacetylase. This hypothesis was tested in reporter gene experiments in which expression constructs coding for particular members of the HDAC complex were cotransfected into Hepa cells together with TCDD-inducible reporter constructs containing the CYP1A1 regulatory sequences. Overexpression of NCoR, however, did not decrease but instead slightly increased reporter gene activity in these cells. The expected inhibition was observed solely in the case of SMRT, which slightly reduced constitutive and TCDD-induced reporter gene activity. Simultaneous expression of NCoR and SMRT showed no further effects, and coexpression of HDAC1 with the two corepressors did not alter this situation. Thus, additional factors likely involved in the repression of the CYP1A1 gene by the HDAC complex remain to be identified.
Taken together, the characterisation of an exogenous-ligand-independent AHR/ARNT complex on the DRE in HeLa cells that represses transcription of the CYP1A1 gene provides a model system for investigating the endogenous processes involved in the regulation of AHR function. This study implicates an HDAC-mediated repression of the CYP1A1 gene that contributes to its xenobiotic-induced expression in a tissue-specific manner. Elucidation of these processes offers insight into the mechanisms underlying the deleterious effects of TCDD and related compounds.

Resumo:

A broad variety of solid-state NMR techniques were used to investigate chain dynamics in several polyethylene (PE) samples, including ultrahigh-molecular-weight PEs (UHMW-PEs) and low-molecular-weight PEs (LMW-PEs). By varying the processing history, i.e. melt/solution crystallization and drawing, these samples acquire different morphologies, leading to different molecular dynamics. Owing to their long-chain nature, the molecular dynamics of polyethylene can be separated into local fluctuations and long-range motion, and NMR allows these kinds of molecular dynamics to be monitored separately. In this work the local chain dynamics in the non-crystalline regions of polyethylene samples was investigated by measuring the 1H-13C heteronuclear dipolar coupling and the 13C chemical shift anisotropy (CSA). Analysis of the motionally averaged 1H-13C heteronuclear dipolar coupling and 13C CSA yielded information about the local anisotropy and the geometry of the motion. Taking advantage of the large difference between the 13C T1 relaxation times in the crystalline and non-crystalline regions of PE, the 1D 13C MAS exchange experiment was used to investigate the cooperative chain motion between these regions. The different chain organizations in the non-crystalline regions were used to explain the relationship between the local fluctuations and the long-range motion of the samples. In simple terms, the cooperative chain motion between the crystalline and non-crystalline regions of PE results in the experimentally observed diffusive behavior of the PE chain. The morphological influences on this diffusive motion are discussed, the relevant factors being the lamellar thickness, the chain organization in the non-crystalline regions, and chain entanglements. The thermodynamics of the diffusive motion in melt- and solution-crystallized UHMW-PEs is discussed, revealing the entropy-controlled character of chain diffusion in PE.
This thermodynamic consideration explains the counterintuitive relationship between the local fluctuations and the long-range motion of the samples. Using the chain diffusion coefficient, the rates of the jump motion in the crystals of melt-crystallized PE have been calculated, and a concept of "effective" jump motion has been proposed to explain the difference between the values derived from the chain diffusion coefficients and those reported in the literature. The observations of this thesis clearly demonstrate the strong relationship between sample morphology and chain dynamics: the morphologies governed by the processing history impose different spatial constraints on the molecular chains, leading to different local and long-range chain dynamics. This knowledge of the morphological influence on microscopic chain motion has many implications for our understanding of the alpha-relaxation process in PE and related phenomena such as crystal thickening, the drawability of PE, and the easy creep of PE fibers.
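The conversion from a chain diffusion coefficient to a jump rate mentioned above can be sketched with a simple one-dimensional random-walk model. This is a minimal illustration, not the thesis's own calculation: the relation D = k a^2 / 2 assumes uncorrelated jumps of fixed length, and the numerical values for D and for the jump distance a (one CH2 advance along the chain axis, roughly 1.27 Å) are illustrative placeholders rather than data from this work.

```python
# Estimate a jump rate in PE crystals from a chain diffusion coefficient,
# using the 1D random-walk relation D = k * a**2 / 2 (uncorrelated jumps).
# All numerical values are illustrative, not results from the thesis.

def jump_rate(D, a):
    """Jump rate k (1/s) from diffusion coefficient D (m^2/s) and
    elementary jump distance a (m), assuming 1D random-walk statistics."""
    return 2.0 * D / a**2

a = 1.27e-10   # one CH2 advance along the chain axis (~c/2 = 1.27 Angstrom)
D = 1.0e-17    # hypothetical chain diffusion coefficient, m^2/s

k = jump_rate(D, a)
print(f"estimated jump rate: {k:.2e} s^-1")
```

An "effective" jump rate in the sense of the thesis would differ from this naive estimate whenever jumps are correlated or only a fraction of them translate the chain, which is precisely the discrepancy the concept is meant to capture.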

Resumo:

The research situates the specific theme of the residence within the author's body of work. The residence constitutes the field of application of the architectural project in which the characteristic traits of the architect's design method, the interpretive key of the proposed study, can be most effectively sought. The process leading to the material constitution of architecture is considered in the phases into which it is decomposed, in the instruments it adopts, in the objectives it sets, and in its relationship with production systems; in how it addresses the themes of form and programme; and it is compared with the extensive literature in the thought of several authors close to Ignazio Gardella. In this way the traits of a methodology strongly marked by realism are defined, one that renders coherent an empirical and rational inquiry tied to an idea of classical architecture, of Enlightenment origin yet attentive to the demands of modernity, within which is realised the linguistic heteronomy that constitutes one of the distinctive traits of Ignazio Gardella's architecture; an aspect repeatedly interpreted as adherence to the twentieth-century movements that constantly intersect the architect's long career. The analysis of the residential work is conducted not through exemplary cases but across the totality of the projects, drawing also on unpublished contributions. It is understood as a path of personal research into compositional processes and the use of language, and it allows a repositioning of the figure of Gardella in relation to the making of architecture and its realisation, rather than to any will to conform to a priori styles or norms. It is the practical dimension, that of the craft, which best lends itself to the interpretation of Gardella's projects.
The architect's residences stand out for their capacity to adapt to the constraints of site, client and technology through formal re-interpretation and the transfer, from one theme to another, of essential elements that convey through their image a precise idea of house and of architecture: not authorial, but recognisable and timeless.

Resumo:

The subject of this work is the diffusion of turbulence into a non-turbulent flow. This phenomenon appears in almost every practical case of turbulent flow: all types of shear flows (wakes, jets, boundary layers) present some boundary between the turbulence and the non-turbulent surroundings; all transients from laminar flow to turbulence must account for turbulent diffusion; and mixing of flows often involves the injection of a turbulent solution into a non-turbulent fluid. The mechanism of what Phillips defined as "the erosion by turbulence of the underlying non-turbulent flow" is called entrainment. It is usually considered to operate on two scales with different mechanics: small-scale nibbling, the entrainment of fluid by viscous diffusion of turbulence, and large-scale engulfment, which entraps large volumes of flow to be "digested" subsequently by viscous diffusion. The exact role of each in the overall entrainment rate is still not well understood, as is the interplay between these two mechanisms of diffusion. It is nevertheless accepted that the entrainment rate scales with the large-scale properties of the flow, while it is not understood how the large-scale inertial behavior can affect an intrinsically viscous phenomenon such as the diffusion of vorticity. In the present work we address the problem of turbulent diffusion through pseudo-spectral DNS simulations of the interface between a volume of decaying turbulence and a quiescent flow. Such simulations give first-hand measurements of the velocity, vorticity and strain fields at the interface; moreover, the framework of unforced decaying turbulence permits the study of both the spatial and the temporal evolution of these fields. The analysis shows that for this kind of flow the overall production of enstrophy, i.e. the square of the vorticity, omega^2, is dominated near the interface by the local inertial transport of "fresh vorticity" coming from the turbulent flow.
Viscous diffusion instead plays the major role in enstrophy production on the outer side of the interface, where the nibbling process is dominant. The data from our simulations seem to confirm the picture of an inertially stirred viscous phenomenon proposed previously by other authors, and provide new data on the inertial diffusion of turbulence across the interface.
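The distinction drawn above between inertial transport and viscous diffusion of enstrophy can be made explicit in the standard enstrophy transport equation for incompressible flow; the following is the textbook form of the balance (not a result of the thesis itself), with u the velocity, omega the vorticity, S_ij the strain-rate tensor and nu the kinematic viscosity:

```latex
\frac{\partial}{\partial t}\!\left(\frac{\omega^{2}}{2}\right)
+ u_j \frac{\partial}{\partial x_j}\!\left(\frac{\omega^{2}}{2}\right)
= \omega_i S_{ij}\,\omega_j
+ \nu\,\nabla^{2}\!\left(\frac{\omega^{2}}{2}\right)
- \nu\,\frac{\partial \omega_i}{\partial x_j}\frac{\partial \omega_i}{\partial x_j}
```

The advective term on the left is the inertial transport of "fresh vorticity" that dominates near the interface, the Laplacian term on the right is the viscous diffusion that dominates on the quiescent side, and the remaining two terms are vortex-stretching production and viscous dissipation of enstrophy.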

Resumo:

The work seeks to assess the possible role of the Energy Community of South East Europe as a factor of stability in the Balkan area. The Community's founding Treaty assigns it the objective of pursuing cooperation in the energy field in order to spread shared institutions and rules as instruments for overcoming conflict; however, many obstacles lie along this path, both internal to the region and external, owing to the influence of international actors and powers with interests in the area. The transition process in many of the countries of the region is not yet concluded, and many of the political knots left by the break-up of the Yugoslav Federation remain unresolved. The energy-corridor projects pursued by the European Union, the United States and Russia keep attention on the Balkans consistently high, and this attention may influence regional processes and the political choices to be made. Against this backdrop, a further important factor contributes to the ongoing dynamics: the economic crisis has made its presence felt in the Balkan region as well, creating significant imbalances that must be assessed in the light of cooperation processes such as that of the Energy Community.

Resumo:

The meaning of a place has commonly been tied to rootedness, the sense of belonging to a setting. Today, by contrast, people are more concerned with the possibilities of free movement and with networks of communication, and these forces have dramatically altered the meaning, as well as the materiality, of architecture. It is therefore significant to explore and redefine the sense and direction of architecture in the age of flow. In this dissertation we first review the gradually changing concept of "place-non-place" and its underlying technological basis. We then portray the transformation of the meaning of architecture under the influence of media, information technology and advanced means of mobility at the dawn of the 21st century. Against such a backdrop, there is a need to sort and analyse architectural practices in response to the triad of place, non-place and the space of flows, which we aim to accomplish in the conclusion. We also trace the concept of flow in the processes of formation and transformation of old cities. As an illuminating case study, we look at the Persian Bazaar from a socio-architectural point of view. In other words, drawing on Robert Putnam's theory of social capital, we link the social context of the Bazaar with the architectural configuration of cities. That is how we argue that "cities as flow" are not necessarily a new paradigm.