888 results for "power to extend time"


Relevance: 100.00%

Abstract:

Magnetic resonance imaging (MRI) is currently precluded for patients bearing active implantable medical devices (AIMDs). The great advantages of this diagnostic modality, together with the increasing number of people benefiting from implantable devices, in particular pacemakers (PM) and cardioverter/defibrillators (ICD), are prompting the scientific community to study the possibility of extending MRI also to implanted patients. The MRI-induced specific absorption rate (SAR) and the consequent heating of biological tissues are among the major concerns that make patients bearing metallic structures contraindicated for MRI scans. To date, both in-vivo and in-vitro studies have demonstrated the potentially dangerous temperature increase caused by the radiofrequency (RF) field generated during MRI procedures in the tissues surrounding thin metallic implants. On the other hand, the technical evolution of MRI scanners and of AIMDs, together with published data on the lack of adverse events, has reopened interest in this field and suggests that, under given conditions, MRI can be safely performed also in implanted patients. With a better understanding of the hazards of performing MRI scans on implanted patients, as well as the development of MRI-safe devices, we may soon enter an era where this imaging modality may be more widely used to assist in the appropriate diagnosis of patients with devices. In this study both experimental measurements and numerical analyses were performed. The aim of the study is to systematically investigate the effects of the MRI RF field on implantable devices and to identify the elements that play a major role in the induced heating. Furthermore, we aimed at developing a realistic numerical model able to simulate the interactions between an RF coil for MRI and biological tissues implanted with a PM, and to predict the induced SAR as a function of the particular path of the PM lead.
The methods developed and validated during the PhD program led to the design of an experimental framework for the accurate measurement of PM lead heating induced by MRI systems. In addition, numerical models based on Finite-Difference Time-Domain (FDTD) simulations were validated to obtain a general tool for investigating the large number of parameters and factors involved in this complex phenomenon. The results obtained demonstrated that MRI-induced heating of metallic implants is a real risk that represents a contraindication to extending MRI scans to patients bearing a PM, an ICD, or other thin metallic objects. On the other hand, both experimental data and numerical results show that, under particular conditions, MRI procedures might be considered reasonably safe also for an implanted patient. The complexity and the large number of variables involved make it difficult to define a unique set of such conditions: when the benefits of an MRI investigation cannot be obtained using other imaging techniques, the possibility to perform the scan should not be immediately excluded, but some considerations are always needed.
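The quantity at the centre of this risk assessment, the specific absorption rate, is conventionally defined from the RF electric field induced in the tissue (standard definition, not specific to this thesis):

```latex
\mathrm{SAR} \;=\; \frac{\sigma\,\lvert \mathbf{E} \rvert^{2}}{\rho}
\qquad \left[\mathrm{W/kg}\right]
```

where \(\sigma\) is the tissue conductivity (S/m), \(\mathbf{E}\) the induced (RMS) electric field and \(\rho\) the tissue mass density (kg/m³); near a lead tip the locally enhanced \(\mathbf{E}\) is what drives the heating discussed above.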

Relevance: 100.00%

Abstract:

The present dissertation focuses on burnout and work engagement among teachers, with a special focus on the Job Demands-Resources (JD-R) Model. Chapter 1 focuses on teacher burnout. It aims to investigate the role of efficacy beliefs, using negatively worded inefficacy items instead of positive ones, and to establish whether depersonalization and cynicism can be considered two different dimensions of the teacher burnout syndrome. Chapter 2 investigates the factorial validity of the instruments used to measure work engagement (i.e. the Utrecht Work Engagement Scale, UWES-17 and UWES-9). Moreover, because the current study is partly longitudinal in nature, the stability of engagement across time can also be investigated. Finally, based on cluster analyses, two groups that differ in levels of engagement are compared as far as their job and personal resources (i.e. possibilities for personal development, work-life balance, and self-efficacy), positive organizational attitudes and behaviours (i.e. job satisfaction and organizational citizenship behaviour) and perceived health are concerned. Chapter 3 tests the JD-R model in a longitudinal way, by also integrating the role of personal resources (i.e. self-efficacy). This chapter seeks answers to the question of which job demands and which job and personal resources contribute most to discriminating burned-out teachers from non-burned-out teachers, as well as engaged teachers from non-engaged teachers. Chapter 4 uses a diary study to extend knowledge about the dynamic nature of the JD-R model by considering between- and within-person variations with regard to both the motivational and the health-impairment processes.

Relevance: 100.00%

Abstract:

This thesis investigates phenomena of vortex dynamics in type II superconductors depending on the dimensionality of the flux-line system and the strength of the driving force. In the low-dissipative regime of Bi_2Sr_2CaCu_2O_{8+delta} (BSCCO) the influence of oxygen stoichiometry on flux-line tension was examined. An entanglement crossover of the vortex system at low magnetic fields was identified and a comprehensive B-T phase diagram of solid and fluid phases derived. In YBa_2Cu_3O_7 (YBCO), extremely long (>100 mm) high-quality measurement bridges made it possible to extend the electric-field window in transport measurements by up to three orders of magnitude. Complementary analyses of the data conclusively produced dynamic exponents of the glass transition z~9, considerably higher than theoretically predicted and previously reported. In high-dissipation measurements, a voltage instability appearing in the current-voltage characteristics of type II superconductors was observed for the first time in BSCCO and shown to result from a Larkin-Ovchinnikov flux-flow vortex instability under the influence of quasi-particle heating. However, in an analogous investigation of YBCO the instability was found to appear only in the temperature and magnetic-field regime of the vortex-glass state. Rapid-pulse measurements fully confirmed this correlation of vortex glass and instability in YBCO and revealed a constant rise time (~µs).
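The dynamic exponent z quoted above enters the standard vortex-glass scaling of Fisher, Fisher and Huse, in which the glass correlation length diverges and the linear resistivity vanishes at the glass temperature T_g as (standard result, stated here for context):

```latex
\xi_{g} \propto \lvert T - T_{g} \rvert^{-\nu},
\qquad
\rho_{\mathrm{lin}} \propto (T - T_{g})^{\nu(z+2-d)}
\;\stackrel{d=3}{=}\; (T - T_{g})^{\nu(z-1)}
```

so a value of z ~ 9 implies a considerably steeper vanishing of the linear resistivity at T_g than the smaller exponents usually predicted and reported.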

Relevance: 100.00%

Abstract:

This research undertakes to provide a typology of multipolar systems. Multipolarity plays a key role in IR theory, for it is strictly associated with the history of European politics from the seventeenth century to the end of World War Two. Despite wide investigation, one can doubt that the matter has received a definitive treatment. The trouble is that current studies often consider multipolarity as a one-dimensional concept. They obviously reckon that multipolarity is substantially different from other systems and deserves attention, but they generally fail to distinguish between different types of multipolar systems (the few exceptions are listed in chapter one). The history of international politics tells us a different story. Multipolar power systems may share some general characteristics, but they also show a wide array of differences, and understanding these differences requires a preliminary work of classification. That is the purpose of the present study. The work is organized as follows. In chapter one, we provide a cursory review of the literature on multipolarity, with particular reference to the work of Duncan Snidal and Joseph Grieco. Then we propose a four-cell typology of multipolar systems to be tested via historical analysis. The first type, hegemony, is best represented by the European international system at the time of Napoleonic France, and is discussed in chapter two. Type number two is the traditional concert of Europe, whose history is detailed in chapter three. Type number three is the reversal of alliances, whose closest example, the diplomatic revolution of 1756, is discussed in chapter four. Finally, chapter five is devoted to the chain-gang system, and European politics from Bismarck’s late years to World War One represents a good illustration of how it works.
In chapter six we proceed to draw a first evaluation of the main results achieved in the previous chapters, in order to see if, and to what extent, our typology serves the purpose of explaining the nature of multipolar systems.

Relevance: 100.00%

Abstract:

The hydrologic risk (and the hydro-geologic one, closely related to it) is, and has always been, a very relevant issue, due to the severe consequences that may be caused by floods or by waters in general, in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with a residual uncertainty about what will actually happen. This type of uncertainty is what is discussed and analyzed in this thesis. In operational problems, it is possible to affirm that the ultimate aim of forecasting systems is not to reproduce the river behavior: this is only a means for reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature confusion is often made on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy to adopt be based on the evaluation of the model prediction, that is, on its ability to represent reality, or on the evaluation of what will actually happen on the basis of the information given by the model forecast?
Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making it possible to carry out objective and realistic risk evaluations. In particular, such a tool should be able to provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
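The last requirement, a flooding probability tied to a lead time, can be sketched from an ensemble of forecasts: the probability is estimated as the fraction of ensemble members that exceed the alert threshold within the horizon. A minimal sketch; the function name and all numbers are illustrative, not the operational tool developed in the thesis:

```python
# Sketch: flooding probability within a lead time, from an ensemble forecast.
# All names and numbers are illustrative, not taken from the thesis.

def flood_probability(ensemble, threshold, horizon):
    """Fraction of ensemble members whose water level exceeds
    `threshold` at any time step within the first `horizon` steps."""
    exceed = sum(
        1 for member in ensemble
        if any(level > threshold for level in member[:horizon])
    )
    return exceed / len(ensemble)

# Four hypothetical forecast members (water level per hour, metres)
ensemble = [
    [2.1, 2.7, 3.4, 3.9],   # exceeds 3.5 m at hour 4
    [2.0, 2.4, 2.8, 3.0],
    [2.3, 3.1, 3.6, 4.2],   # exceeds 3.5 m at hour 3
    [1.9, 2.2, 2.5, 2.6],
]
print(flood_probability(ensemble, threshold=3.5, horizon=4))  # 0.5
```

Shortening the horizon to the time actually available for intervention (e.g. `horizon=2`) lowers the probability, which is exactly the coupling between flooding probability and time of occurrence discussed above.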

Relevance: 100.00%

Abstract:

1) Background: The most common method to evaluate clarithromycin resistance is the E-test, but it is time-consuming. Resistance of Hp to clarithromycin is due to point mutations in the 23S rRNA. Eight different point mutations have been related to clarithromycin resistance, but the large majority of clarithromycin resistance depends on three point mutations (A2142C, A2142G and A2143G). Novel PCR-based clarithromycin resistance assays, applicable even to paraffin-embedded biopsy specimens, have been proposed. Aims: to assess clarithromycin resistance by detecting these point mutations (with the E-test as the reference method); secondly, to investigate the relation with MIC values. Methods: Paraffin-embedded biopsies of Hp-positive patients were retrieved. The A2142C, A2142G and A2143G point mutations were detected by molecular analysis after DNA extraction, using a TaqMan real-time PCR. Results: The study enrolled 86 patients: 46 resistant and 40 sensitive to clarithromycin. The Hp status was evaluated at endoscopy by rapid urease test (RUT), histology and Hp culture. According to real-time PCR, 37 specimens were susceptible to clarithromycin (wild-type DNA) whilst the remaining 49 specimens (57%) were resistant. A2143G was the most frequent mutation. A2142C always expresses a resistant phenotype, while A2142G leads to a resistant phenotype only if homozygous. 2) Background: The colonoscopy workload for endoscopy services is increasing due to colorectal cancer prevention. We tested a combination of faecal tests to improve accuracy and prioritize access to colonoscopy. Methods: we tested a combination of faecal tests (FOBT, M2-PK and calprotectin) in a group of 280 patients requiring colonoscopy. Results: 47 patients had CRC and 85 had advanced adenoma/s at colonoscopy/histology. Considering single tests for CRC detection, FOBT was the test with the highest specificity and PPV, while M2-PK had the highest sensitivity and the highest NPV. The combination was most interesting in terms of PPV, and the best combination of tests was i-FOBT + M2-PK.
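The screening metrics compared above (sensitivity, specificity, PPV, NPV) all follow from a 2x2 confusion table; a minimal sketch, with made-up counts rather than the thesis data:

```python
# Sketch of the standard screening metrics from a 2x2 confusion table.
# The counts below are illustrative, not the study's data.

def metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # positives detected among diseased
        "specificity": tn / (tn + fp),   # negatives among the healthy
        "ppv": tp / (tp + fp),           # disease probability given a positive test
        "npv": tn / (tn + fn),           # health probability given a negative test
    }

m = metrics(tp=40, fp=10, fn=7, tn=223)
print(round(m["sensitivity"], 3), round(m["ppv"], 3))  # 0.851 0.8
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the tested group, which is why combining tests changes mainly the predictive values.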

Relevance: 100.00%

Abstract:

Osmotic Dehydration and Vacuum Impregnation are interesting operations in the food industry, with applications in minimal fruit processing and/or freezing, making it possible to develop new products with specific innovative characteristics. Osmotic dehydration is widely used for the partial removal of water from cellular tissue by immersion in a hypertonic (osmotic) solution. The driving force for the diffusion of water from the tissue is provided by the difference in water chemical potential between the external solution and the internal liquid phase of the cells. Vacuum Impregnation of porous products immersed in a liquid phase consists of a reduction of pressure in a solid-liquid system (vacuum step) followed by the restoration of atmospheric pressure (atmospheric step). During the vacuum step the internal gas in the product pores expands and partially flows out, while during the atmospheric step the residual gas is compressed and the external liquid flows into the pores (Fito, 1994). This process is also a very useful unit operation in food engineering, as it allows specific solutes to be introduced into the tissue, where they can play different functions (antioxidants, pH regulators, preservatives, cryoprotectants, etc.). The present study attempts to enhance our understanding and knowledge of fruit as a living organism, interacting dynamically with the environment, and to explore metabolic, structural and physico-chemical changes during fruit processing. Innovative approaches and/or technologies such as SAFES (Systematic Approach to Food Engineering System), LF-NMR (Low Frequency Nuclear Magnetic Resonance) and GASMAS (Gas in Scattering Media Absorption Spectroscopy) are very promising for studying these phenomena in depth. The SAFES methodology was applied in order to study the irreversibility of the structural changes of kiwifruit during short osmotic treatments.
The results showed that the deformed tissue can recover its initial state 300 min after osmotic dehydration at 25 °C. LF-NMR proved very useful in the study of water status and compartmentalization, permitting separate observation of three different water populations located in the vacuole, in the cytoplasm plus extracellular space, and in the cell wall. The GASMAS technique made it possible to study the pressure equilibration after Vacuum Impregnation, showing that after the restoration of atmospheric pressure in the solid-liquid system there was a remaining internal low pressure in the apple tissue that slowly increased until reaching atmospheric pressure, on a time scale that depends on the vacuum applied during the vacuum step. The physiological response of apple tissue to the Vacuum Impregnation process was studied, indicating the possibility of vesicular transport within the cells. Finally, the possibility of extending the freezing tolerance of strawberry fruits impregnated with cryoprotectants was proven.

Relevance: 100.00%

Abstract:

Adaptive Optics is the real-time measurement and correction of the wavefront aberration of starlight caused by atmospheric turbulence, which limits the angular resolution of ground-based telescopes and thus their capability to explore in depth faint and crowded astronomical objects. The lack, over a relevant fraction of the sky, of natural stars bright enough to be used as reference sources for Adaptive Optics led to the introduction of artificial reference stars. The so-called Laser Guide Stars are produced by exciting the sodium atoms in a layer lying at 90 km of altitude with a powerful laser beam projected toward the sky. The possibility to turn on a reference star close to the scientific targets of interest has the drawback of an increased difficulty in wavefront measurement, mainly due to the temporal instability of the sodium-layer density. These issues grow with the telescope diameter. In view of the construction of the 42 m diameter European Extremely Large Telescope, a detailed investigation of the achievable performance of Adaptive Optics becomes mandatory to exploit its unique angular resolution. The goal of this Thesis was to present a complete description of the development of a laboratory Prototype simulating a Shack-Hartmann wavefront sensor using Laser Guide Stars as references, under the conditions expected for a 42 m telescope. From the conceptual design, through the opto-mechanical design, to the Assembly, Integration and Test, all the phases of the Prototype construction are explained. The tests carried out showed the reliability of the images produced by the Prototype, which agreed with the numerical simulations. For this reason some possible upgrades of the opto-mechanical design are presented, to extend the system functionalities and let the Prototype become a more complete test bench to simulate the performance of, and drive the design of, future Adaptive Optics modules.
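A Shack-Hartmann sensor of the kind prototyped here derives local wavefront slopes from the displacement of each lenslet spot's centroid on the detector. A minimal sketch of that measurement principle; the pixel values and optical parameters are illustrative and not taken from the Prototype's software:

```python
# Sketch: centroid-based slope measurement of one Shack-Hartmann subaperture.
# All numbers are illustrative.

def centroid(image):
    """Intensity-weighted centroid (x, y) of a 2-D spot image."""
    total = sum(sum(row) for row in image)
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    return cx, cy

def slope(image, reference, pixel_pitch, focal_length):
    """Local wavefront slope (rad) from the spot shift relative to a
    reference centroid, for a lenslet of given focal length."""
    cx, cy = centroid(image)
    rx, ry = reference
    return ((cx - rx) * pixel_pitch / focal_length,
            (cy - ry) * pixel_pitch / focal_length)

# A 3x3 subaperture whose spot is displaced to the right of centre
spot = [[0, 0, 0],
        [0, 1, 3],
        [0, 0, 0]]
print(centroid(spot))  # (1.75, 1.0)
```

With an elongated Laser Guide Star spot (the sodium layer is thick), the centroid is biased along the elongation axis, which is one of the difficulties in wavefront measurement mentioned above.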

Relevance: 100.00%

Abstract:

In this thesis the analysis performed to reconstruct the transverse momentum $p_{t}$ spectra of pions, kaons and protons identified with the TOF detector of the ALICE experiment in pp Minimum Bias collisions at $\sqrt{s}=7$ TeV is reported. After a detailed description of all the parameters which influence the TOF PID performance (time resolution, calibration, alignment, matching efficiency, time-zero of the event), the method used to identify the particles, the unfolding procedure, is discussed. With this method, thanks also to the excellent TOF performance, the pion and kaon spectra can be reconstructed from $p_{t}=0.5$ GeV/$c$ upwards, while the proton spectra can be measured in an interval starting from $p_{t}=0.8$ GeV/$c$. To prove the robustness of these results, a comparison with the spectra obtained with a $3\sigma$-cut PID procedure is reported, showing an agreement within 5%. The estimation of the systematic uncertainties is described. The reported spectra provide very useful information to tune the Monte Carlo generators which, as was shown, are not able to describe $\pi$, $K$ and $p$ production over the full momentum range. The same limitation of the theoretical models in describing the data was observed when comparing the $K/\pi$ and $p/\pi$ ratios obtained with the TOF analysis with the Monte Carlo predictions. Finally, the comparison between the TOF results and the spectra obtained with analyses that use other ALICE PID detectors and techniques to extend the identified spectra to a wider $p_{t}$ range is reported, showing an agreement within 6%.
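The $3\sigma$-cut cross-check mentioned above selects tracks whose measured time of flight lies within three resolutions of the value expected for a given mass hypothesis. A schematic sketch of that selection; track length, resolution and momenta are illustrative numbers, not detector constants from the thesis:

```python
# Sketch: n-sigma time-of-flight PID selection. Numbers are illustrative.
import math

C = 0.299792458  # speed of light in m/ns

def expected_tof(p, mass, length):
    """Expected time of flight (ns) over `length` metres for a track of
    momentum `p` (GeV/c) under the mass hypothesis `mass` (GeV/c^2)."""
    beta = p / math.sqrt(p * p + mass * mass)
    return length / (beta * C)

def passes_3sigma(t_measured, p, mass, length, sigma):
    """True if the measured time is compatible with the hypothesis at 3 sigma."""
    return abs(t_measured - expected_tof(p, mass, length)) < 3 * sigma

# A 1 GeV/c track over 3.7 m, 80 ps resolution, kaon mass hypothesis
t_kaon = expected_tof(1.0, 0.494, 3.7)
print(passes_3sigma(t_kaon + 0.1, 1.0, 0.494, 3.7, sigma=0.08))  # True
```

Because the expected times for different mass hypotheses converge as momentum grows, such a cut loses separation power at high $p_{t}$, which is why the unfolding procedure is preferred for the final spectra.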

Relevance: 100.00%

Abstract:

Studies of polycyclic aromatic hydrocarbons (PAHs) have shown that the overall size, periphery, and functionalization of PAHs are crucial parameters which significantly alter their electronic structure and chemical reactivity. Therefore, the major direction of this thesis is the synthesis and characterization of extended PAHs: (i) with different functional groups improving their processability, (ii) with a different periphery changing their chemical reactivity, (iii) with inclusions of different metal ions, which influence their physical properties. • The cyclodehydrogenation reaction has been employed to synthesise polyphenylene ribbons with preplanarized (dibenzo[e,l]pyrene) moieties in the aromatic core, with up to 10 nm linear size. The synthetic strategy employed is discussed in Chapter 2 and is based on stoichiometrically controlled DIELS-ALDER cycloaddition. All molecules possessed very good solubility in common organic solvents, allowing their characterization by standard analytical techniques. • A new concept was developed to extend the PAH core. Here the introduction of “zigzag” sites, discussed in Chapter 3, was shown to lower the HOMO-LUMO gap and to result in higher chemical reactivities. This allowed, in Chapters 3, 4 and 5, further functionalization of PAHs and enlargement of their aromatic cores up to 224 atoms. Despite the size of these novel molecules, extraordinary solubilities in common organic solvents were obtained due to distortions from planarity of the aromatic cores by bulky tert-butyl groups, which hampered the usually very pronounced aggregation tendency of extended π-systems. All extended PAHs possess a small HOMO-LUMO gap together with good electron affinities, making them potential candidates for application in organic FETs. • An alternative synthetic route has been proposed to obtain extended metal-PAH complexes.
Using the quinoxaline methodology, in Chapter 5 three new phenanthroline ligands (up to 60 skeletal atoms) have been synthesized and characterized. Four different complexes of Ru(II), Cu(II) and Pt(II) were synthesized, making it possible to construct a range of large metal complexes by varying the metal as well as the number and nature of the ligands.

Relevance: 100.00%

Abstract:

Numerous studies show that temporal intervals are represented through a spatial code extending from left to right, with short intervals represented to the left of long ones. Moreover, this spatial arrangement of time can be influenced by the manipulation of spatial attention. The present thesis enters the current debate on the relationship between the spatial representation of time and spatial attention through the use of a technique that modulates spatial attention, namely Prismatic Adaptation (PA). The first part is devoted to the mechanisms underlying this relationship. We showed that shifting spatial attention with PA towards one side of space produces a distortion of the representation of temporal intervals consistent with the side of the attentional shift. This occurs both with visual and with auditory stimuli, even though the auditory modality is not directly involved in the visuo-motor PA procedure. This result suggested that the spatial code used to represent time is a central mechanism influenced at high levels of spatial cognition. The thesis continues with the investigation of the cortical areas mediating the space-time interaction, through neuropsychological, neurophysiological and neuroimaging methods. In particular, we showed that areas located in the right hemisphere are crucial for the processing of time, whereas areas located in the left hemisphere are crucial for the PA procedure and for PA to affect temporal intervals. Finally, the thesis is devoted to the study of disorders of the spatial representation of time. The results indicate that a spatial-attention deficit, after right-hemisphere damage, causes a deficit in the spatial representation of time, which negatively affects the patients' daily life.
Particularly interesting are the results obtained with PA. A PA treatment that is effective in reducing the spatial-attention deficit also reduces the deficit in the spatial representation of time, improving the patients' quality of life.

Relevance: 100.00%

Abstract:

Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties and thereby to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients and second derivatives at the CCSD and CCSD(T) levels of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the ’High-accuracy Extrapolated Ab initio Thermochemistry’ (HEAT) protocols, the effects of increased basis-set size and higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized for molecules containing second-row atoms in the case of vinyl chloride. This allowed the different experimentally reported values to be discriminated.
In the case of the benzene molecule it was shown that even for molecules of this size chemical accuracy can be achieved. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries if vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory for the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. The NMR chemical shift study for the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatment. The equilibrium structure of diacetylene has been determined based on the combination of experimental rotational constants of thirteen isotopic species and zero-point vibrational corrections calculated at various quantum-chemical levels. These empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations on the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination of the molecular equilibrium structure of ferrocene to date is presented.
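Composite schemes such as HEAT estimate the complete-basis-set (CBS) limit of the correlation energy from calculations in successive correlation-consistent basis sets; the widely used two-point $X^{-3}$ extrapolation is a one-line formula. A minimal sketch with hypothetical energies, not values from the thesis:

```python
# Sketch: two-point 1/X^3 extrapolation of correlation energies to the
# complete-basis-set limit. The energies below are hypothetical.

def cbs_two_point(e_x, x, e_y, y):
    """Extrapolate correlation energies e_x, e_y obtained with basis-set
    cardinal numbers x < y (e.g. 3 = triple zeta, 4 = quadruple zeta)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Hypothetical CCSD(T) correlation energies (hartree) in TZ and QZ bases
e_cbs = cbs_two_point(-0.300, 3, -0.310, 4)
print(round(e_cbs, 6))  # -0.317297
```

The extrapolated value lies below the largest-basis result, reflecting the slow $X^{-3}$ convergence of the correlation energy that motivates the "increased basis-set size" studies mentioned above.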

Relevance: 100.00%

Abstract:

The alignment and anchoring of liquid crystals on solid surfaces is a key problem for modern device technology that until now has been treated empirically, but that can now be tackled by atomistic computer simulations. Molecular dynamics (MD) simulations were used in this thesis work to study two films of n-alkyl-4’-cyanobiphenyl liquid crystals with 7- and 8-carbon chains (7CB and 8CB), with a thickness of 15 nm, confined between two (001) surfaces of MoS2 (molybdenite). The isotropic and nematic phases of both liquid crystals were simulated, and the resulting structures were characterized. A new force field was designed to model the interactions between the liquid crystal (LC) molecules and the surface of molybdenite, while an accurate force field developed previously was used to model the 7CB and 8CB molecules. The results show that the (001) molybdenite surface induces a planar orientation in both liquid crystals. For the nematic phase of 8CB, one of the two solid/LC interfaces is composed of a first layer of molecules aligned parallel to the surface, followed by a second layer of molecules aligned perpendicular to the surface (also called homeotropic). The effect of the surface appears to be local in nature, as it is confined to the first 15 Å of the LC film. Conversely, for the nematic phase of 7CB, a planar ordering is established throughout the LC film. The LC molecules at the interface with the molybdenite appear to align their alkyl chains preferentially toward the solid substrate. The resulting tilt angle of the molecules was found to be in good agreement with experimental measurements available in the literature. Despite the fact that the MD simulations spanned a time range of more than 100 ns, the nematic phases of both 7CB and 8CB were found not to be completely formed. In order to confirm the findings presented in this thesis, we propose to extend the current study.
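Nematic ordering in simulations of this kind is typically quantified by the second-rank order parameter $S = \langle (3\cos^{2}\theta - 1)/2 \rangle$, where $\theta$ is the angle between each molecular long axis and the director. A pure-Python sketch with made-up axis vectors (and a director taken as given, whereas in practice it is obtained from the ordering tensor):

```python
# Sketch: nematic order parameter from molecular long-axis unit vectors.
# Axis vectors below are made up; in an MD analysis they would come from
# the simulated 7CB/8CB molecules.
import math

def order_parameter(axes, director):
    """S = <(3 cos^2(theta) - 1) / 2> for unit vectors `axes`
    measured against a unit `director`."""
    s = 0.0
    for a in axes:
        cos_t = sum(ai * di for ai, di in zip(a, director))
        s += (3.0 * cos_t * cos_t - 1.0) / 2.0
    return s / len(axes)

director = (0.0, 0.0, 1.0)
aligned = [(0.0, 0.0, 1.0)] * 5                    # perfect nematic
tilted = [(0.0, math.sin(0.3), math.cos(0.3))] * 5  # uniformly tilted
print(order_parameter(aligned, director))  # 1.0
```

$S = 1$ for perfect alignment, $S = 0$ for the isotropic phase, and $S = -0.5$ for axes perpendicular to the director, so tracking $S$ over the trajectory is one way to see that the nematic phases were "not completely formed" within 100 ns.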

Relevance: 100.00%

Abstract:

The aim of the present work was the detailed analysis of the migration dynamics of epithelial monolayers by means of two novel in vitro biosensors: electrical cell-substrate impedance sensing (ECIS) and the quartz crystal microbalance (QCM). Both methods proved sensitive to cell motility and to nanocytotoxicity. Within the first project, a fingerprinting of cancer cells was carried out on the basis of their motility dynamics and of the electrical or acoustic fluctuations they generate on ECIS or QCM sensors; these real-time sensors were validated with the help of classical in vitro Boyden-chamber migration and invasion assays. Fluctuation signatures, i.e. long-term correlations or fractal self-similarity due to collective cell movement, were quantified by variance analysis, Fourier analysis and detrended fluctuation analysis. Stochastic long-term memory phenomena proved to be decisive contributions to the response of adherent cells on the QCM and ECIS sensors. Furthermore, the influence of low-molecular-weight toxins on cytoskeletal dynamics was followed: the effects of cytochalasin D, phalloidin and blebbistatin, as well as taxol, nocodazole and colchicine, were captured via QCM and ECIS fluctuation analysis. In a second project, adhesion processes as well as cell-cell and cell-substrate degradation processes upon nanoparticle administration were characterized, in order to obtain a measure of nanocytotoxicity as a function of the shape, functionalization, stability and charge of the particles. In conclusion, the novel real-time biosensors QCM and ECIS possess high cell specificity, respond to cytoskeletal dynamics, and can act as sensitive detectors of cell vitality.
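The detrended fluctuation analysis (DFA) used above to quantify long-term correlations fits and removes a local trend in each window of the integrated signal and extracts a scaling exponent from the residual fluctuations; uncorrelated noise gives an exponent near 0.5, long-term memory gives larger values. A compact first-order sketch (generic DFA, not the thesis' analysis code):

```python
# Sketch: first-order detrended fluctuation analysis (DFA-1).
import math
import random

def dfa(signal, scales):
    """Scaling exponent alpha from the slope of log F(n) vs log n."""
    mean = sum(signal) / len(signal)
    profile, c = [], 0.0
    for v in signal:                      # integrate the centred signal
        c += v - mean
        profile.append(c)
    log_n, log_f = [], []
    for n in scales:
        sse, n_win = 0.0, len(profile) // n
        for w in range(n_win):            # detrend each window linearly
            seg = profile[w * n:(w + 1) * n]
            xs = range(n)
            sx, sy = sum(xs), sum(seg)
            sxx = sum(x * x for x in xs)
            sxy = sum(x * y for x, y in zip(xs, seg))
            slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            inter = (sy - slope * sx) / n
            sse += sum((y - (slope * x + inter)) ** 2
                       for x, y in zip(xs, seg))
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sse / (n_win * n)))  # log RMS fluctuation
    m = len(scales)                       # slope of log F against log n
    sx, sy = sum(log_n), sum(log_f)
    sxx = sum(x * x for x in log_n)
    sxy = sum(x * y for x, y in zip(log_n, log_f))
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

random.seed(1)
white = [random.gauss(0, 1) for _ in range(4096)]
print(round(dfa(white, [8, 16, 32, 64, 128]), 2))  # close to 0.5
```

Applied to ECIS or QCM fluctuation traces, an exponent well above 0.5 is the signature of the stochastic long-term memory reported for adherent cells.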

Relevance: 100.00%

Abstract:

The present work aims primarily to provide an analysis of the forms taken by the principle of continuity in administrative law, attempting at the same time to highlight both the founding bases that characterize every general principle and the more current nuances emerging from the most recent doctrine and case law. Starting from the fundamental premise that most interpreters have approached the principle of continuity in the administrative field with prevailing reference to the organizational-structural dimension, the analysis is extended so as to recognize in it a manifestation of key principles of the administrative function as a whole, such as efficiency, sound administration, and the achievement of good results. The central relevance of continuity derives from its infinitely many possible forms, but this work insists in particular on the fact that two fundamental interpretations of it can be given, strongly connected and mutually influencing: alongside the one that understands it as a sign of perennial stability, capable of ensuring certainty about the modus operandi of public administrations and the protection of the expectations they generate, stands a second vision that instead privileges its dynamic aspect, interpreting it as the criterion requiring the public administration to follow a changing reality, evolving together with it, in order to ensure the permanence of the useful result for the community, in keeping with its mission of care. In this perspective, the first part of the present work analyzes the results already achieved by exegetical elaboration on administrative continuity, with particular reference to its manifestations in the fields of administrative organization and activity, as well as to some of its concrete expressions in the sector of public contracts and public services.
The second part is instead devoted to offering some suggestions and hypotheses for new, systematic interpretations of the principle, in relation to general concepts such as time, space and the overall design of the administrative function.