639 results for novelty inventive


Relevance: 10.00%

Abstract:

The full exploitation of the multi-hop multi-path connectivity opportunities offered by heterogeneous wireless interfaces could enable innovative Always Best Served (ABS) deployment scenarios where mobile clients dynamically self-organize to offer and exploit Internet connectivity in the best possible way. Only novel middleware solutions based on heterogeneous context information can seamlessly enable this scenario: middleware solutions should i) provide translucent access to low-level components, to achieve both fully aware and simplified pre-configured interactions; ii) allow full exploitation of communication interface capabilities, i.e., not only obtaining but also providing connectivity in a peer-to-peer fashion, thus relieving end users and application developers from the burden of directly managing wireless interface heterogeneity; and iii) treat user mobility as crucial context information, evaluating at provision time the suitability of the available Internet points of access differently depending on whether the mobile client is still or in motion. The novelty of this research work resides in three primary points. First, it proposes a novel model and taxonomy providing a common vocabulary to describe and position solutions in the area of context-aware autonomic management of preferred network opportunities. Second, it presents PoSIM, a context-aware middleware for the synergic exploitation and control of heterogeneous positioning systems that facilitates the development and portability of location-based services. PoSIM is translucent, i.e., it can provide application developers with differentiated visibility of the data characteristics and control possibilities of the available positioning solutions, thus dynamically adapting to application-specific deployment requirements and enabling cross-layer management decisions. Finally, it provides the MMHC solution for the self-organization of multi-hop multi-path heterogeneous connectivity. MMHC considers a limited set of practical indicators of node mobility and wireless network characteristics for a coarse-grained estimation of the expected reliability/quality of the multi-hop paths available at runtime. In particular, MMHC manages the durability/throughput-aware formation and selection of different multi-hop paths simultaneously. Furthermore, MMHC provides a novel solution based on adaptive buffers, proactively managed through handover prediction, to support continuous services, especially by pre-fetching multimedia content to avoid streaming interruptions.
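As a rough illustration of the adaptive-buffer idea (the indicators, thresholds and prediction machinery of MMHC are not detailed in the abstract, so every name and parameter below is a hypothetical placeholder), a prefetch target could be sized from a predicted handover outage as follows:

    # Hypothetical sketch of handover-aware pre-fetching in the spirit of MMHC's
    # adaptive buffers; names, parameters and the prediction interface are
    # illustrative assumptions, not the actual middleware API.

    def target_buffer_bytes(playout_rate_bps: float,
                            predicted_outage_s: float,
                            safety_margin_s: float = 2.0) -> int:
        """Bytes that must be buffered to bridge the predicted connectivity gap."""
        return int(playout_rate_bps / 8 * (predicted_outage_s + safety_margin_s))

    def prefetch_request(buffered_bytes: int,
                         playout_rate_bps: float,
                         handover_predicted: bool,
                         predicted_outage_s: float) -> int:
        """Extra bytes to pre-fetch now so playback survives the predicted handover."""
        if not handover_predicted:
            return 0  # steady state: keep the normal, smaller buffer
        deficit = target_buffer_bytes(playout_rate_bps, predicted_outage_s) - buffered_bytes
        return max(0, deficit)

    # Example: a 1 Mbit/s stream, 3 s of predicted outage, 500 kB already buffered
    extra = prefetch_request(500_000, 1_000_000, True, 3.0)  # -> 125000 bytes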

Relevance: 10.00%

Abstract:

Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology to answer the increasing requests for dynamic bandwidth allocation and to configure multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from being directly accessible for configuring and offering network services, and they need to be enriched with more user-oriented functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to provide the network with improved usability and accessibility of the services offered by the Optical Network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies gives users and applications access to abstracted levels of information regarding the offered advanced network services. This thesis addresses the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, the work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to drive the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both technologies promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduler forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance and timing-focused design of both the memory and the forwarding logic is needed. This open issue has been faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
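The abstract does not spell out the min/max formulation, but one common way a burst scheduler reduces to a simple min-type calculation is horizon scheduling over the output wavelengths: each channel keeps the time at which it becomes free, and assigning a burst amounts to taking a minimum over those horizons. The sketch below is an illustrative assumption along those lines, not the hardware design proposed in the thesis.

    # Illustrative horizon-based burst scheduler (an assumption, not the thesis
    # design): each output wavelength keeps a "horizon" = time at which it becomes
    # free, and scheduling a burst reduces to a min over the feasible horizons,
    # so the work per burst depends only on the number of wavelengths.

    from typing import List, Optional

    def schedule_burst(horizons: List[float], arrival: float, length: float) -> Optional[int]:
        """Return the index of the chosen wavelength, or None if the burst is dropped."""
        feasible = [i for i, h in enumerate(horizons) if h <= arrival]
        if not feasible:
            return None  # no void filling in this simplified version: drop the burst
        # Min function: pick the feasible channel that minimises the created void
        # (arrival - horizon), i.e. the latest available unused channel.
        best = min(feasible, key=lambda i: arrival - horizons[i])
        horizons[best] = arrival + length  # update that channel's horizon
        return best

    # Example: 4 wavelengths, a burst arriving at t=10 lasting 2 time units
    channels = [0.0, 9.5, 12.0, 7.0]
    print(schedule_burst(channels, arrival=10.0, length=2.0))  # -> 1 (horizon 9.5)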

Relevance: 10.00%

Abstract:

FIR spectroscopy is an alternative way of collecting spectra of many inorganic pigments and corrosion products found on art objects, which are not normally observed in the MIR region. Most FIR spectra are traditionally collected in transmission mode, but as a real novelty it is now also possible to record FIR spectra in ATR (Attenuated Total Reflectance) mode. In FIR transmission we employ polyethylene (PE) for the preparation of pellets by embedding the sample in PE. Unfortunately, the preparation requires heating of the PE in order to produce a transparent pellet. This affects compounds with low melting points, especially those with structurally incorporated water. Another option in FIR transmission is the use of thin films. We test the use of polyethylene thin films (PETF), both commercial and laboratory-made. ATR collection is possible in both the MIR and FIR regions on solid, powdery or liquid samples. Changing from the MIR to the FIR region is easy, as it simply requires the change of detector and beamsplitter (which can be performed within a few minutes). No preparation of the sample is necessary, which is a huge advantage over the PE transmission method. The most obvious differences when comparing transmission with ATR are the distortion of band shape (which appears asymmetrical in the lower wavenumber region) and intensity differences. However, the biggest difference can be the shift of strongly absorbing bands towards lower wavenumbers in ATR mode. The sometimes large band shift necessitates the collection of standard library spectra in both FIR transmission and ATR modes, provided these two collection methods are to be employed for the analysis of unknown samples. Standard samples of 150 pigment and corrosion compounds are thus collected in both FIR transmission and ATR mode in order to build up a digital library of spectra for comparison with unknown samples. XRD, XRF and Raman spectroscopy assist us in confirming the purity or impurity of our standard samples. 24 didactic test tablets, with known pigment and binder painted on the surface of a limestone tablet, are used for testing the established library and the different ways of collecting in ATR and transmission mode. In ATR, micro-samples are scratched from the surface and examined in both the MIR and FIR regions. Additionally, direct surface contact of the didactic tablets with the ATR crystal is tested, together with water-enhanced surface contact. In FIR transmission we compare the powder from our test tablets on the laboratory PETF and embedded in PE. We also compare the PE pellets collected using a 4x beam condenser, focusing the IR beam area from 8 mm to 2 mm. A few samples collected from a mural painting in a Nepalese temple, corrosion products collected from archaeological Chinese bronze objects, and samples from mural paintings in an Italian abbey are examined by ATR or transmission spectroscopy.
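The band-shape distortion and the intensity differences mentioned above are commonly related to the wavelength-dependent penetration depth of the evanescent wave in ATR. As a point of reference (a standard textbook relation, not a result of this work), the penetration depth is

    \[ d_p = \frac{\lambda}{2\pi\, n_1 \sqrt{\sin^2\theta - (n_2/n_1)^2}} \]

where \lambda is the wavelength, \theta the angle of incidence, and n_1, n_2 the refractive indices of the ATR crystal and the sample. Since d_p grows with \lambda, absorption appears relatively stronger at lower wavenumbers, which contributes to the asymmetric band shapes and to the apparent shifts towards lower wavenumbers.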

Relevance: 10.00%

Abstract:

In territories where food production is mostly scattered across several small or medium-sized, or even domestic, farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process carried out in each period. Coupling high-efficiency micro-cogeneration units with easily handled biomass conversion equipment able to treat different materials would provide important advantages to farmers and to the community as well, so that increasing the feedstock flexibility of gasification units is nowadays seen as a further paramount step towards their wide diffusion in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose and are therefore discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first one focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone, through which the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, mainly based on its chemical composition. A good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions).
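As a point of reference for the equilibrium closure mentioned above (the general textbook relation, with no claim about the specific correlation used in the thesis), the water-gas shift equilibrium assumed in the gasification zone can be written, for an ideal gas mixture, as

    \[ \mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2}, \qquad K_{\mathrm{WGS}}(T) = \frac{y_{\mathrm{CO_2}}\,y_{\mathrm{H_2}}}{y_{\mathrm{CO}}\,y_{\mathrm{H_2O}}} = \exp\!\left(-\frac{\Delta G^{\circ}_{\mathrm{WGS}}(T)}{R\,T}\right), \]

so that, once the gasification-zone temperature is fixed by the energy balance, the syngas composition returned by the mass balance must satisfy this constraint; this is the sense in which the two balances become coupled.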
Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up through the analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are supposed to regulate the main solid conversion steps involved in the gasification process. Gasification units were schematically subdivided into four reaction zones, respectively corresponding to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and therefore temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time especially depends on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any one of these factors according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be envisaged for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at any given time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate could also be regulated for each gasified material (on which the process temperatures depend), so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, heavier gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Unlike other research efforts carried out in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, respectively corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and of their consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints which were mainly determined from the same performance analysis of the cleaning units and from the presumable synergistic effects of contaminants on the correct operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. For this purpose, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and therefore a catalytic material able to work at relatively low temperatures was chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequently significant air consumption for this operation, were calculated in all of the cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem to be unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved not to be feasible for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong increase of the reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of a set of operational parameters, among which total pressure drops, total energy losses, number of units and consumption of secondary materials.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all of the cases, especially because of the high water consumption of water scrubber units in the ammonia removal process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.

Relevance: 10.00%

Abstract:

The aims of the present work were 1) the development and validation of sensitive and substance-specific methods for the quantitative determination of anionic, non-ionic and amphoteric surfactants and their metabolites in aqueous environmental samples, using high-performance mass spectrometric instrumentation; 2) the generation of aerobic, polar degradation products of surfactants in a laboratory fixed-bed bioreactor (FBBR) simulating real environmental conditions, whose biocoenosis originated from surface water; 3) for the elucidation of the degradation mechanism of surfactants, the identification and mass spectrometric characterization of new metabolites obtained in 2), as well as the monitoring of the primary and further degradation; 4) obtaining information on the input and behaviour of surfactants and their degradation products under different hydrological and climatic conditions through quantitative investigations in wastewater and surface water; 5) investigating the behaviour of persistent surfactant metabolites in waterworks that treat contaminated surface water, and determining their occurrence in drinking water; 6) assessing possible adverse effects of newly discovered metabolites by means of ecotoxicological bioassays; 7) demonstrating the environmental relevance of the degradation studies by comparing the field data with the results of the laboratory experiments. The compounds investigated were selected with regard to their production volume and their novelty on the surfactant market. They comprised the detergent ingredient linear alkylbenzene sulfonates (LAS), the surfactant with the highest production volume, the two non-ionic surfactants alkyl glucamides (AG) and alkyl polyglucosides (APG), as well as the amphoteric surfactant cocamidopropyl betaine (CAPB). In addition, the polymeric dye transfer inhibitor polyvinylpyrrolidone (PVP) was investigated.

Relevance: 10.00%

Abstract:

A comparative genomic sequence analysis of a region in human chromosome 11p15.3 and its homologous segment in mouse chromosome 7, between the ST5 and LMO1 genes, has been performed. 158,201 bases were sequenced in the mouse and compared with the syntenic region in human, partially available in the public databases. The analysed region exhibits the typical eukaryotic genomic structure and, compared with the neighbouring regions, strikingly reflects the mosaic pattern distribution of (G+C) and repeat content despite its relatively short size. Within this region the novel gene STK33 (Stk33 in the mouse) was discovered, which codes for a serine/threonine kinase. The finding of this gene constitutes an excellent example of the strength of the comparative sequencing approach. Poor gene predictions in the mouse genomic sequence were corrected and improved by comparison with the unordered, publicly available human genomic sequence data. Phylogenetic analysis suggests that STK33 belongs to the calcium/calmodulin-dependent protein kinase group and seems to be a novelty in the chordate lineage. The gene as a whole seems to evolve under purifying selection, whereas some regions appear to be under strong positive selection. Both the human and mouse versions of serine/threonine kinase 33 consist of seventeen exons, highly conserved in the coding regions, particularly in those coding for the core protein kinase domain. The exon/intron structure in the coding regions of the gene is also conserved between human and mouse. The existence and functionality of the gene is supported by entries in the EST databases and was fully confirmed in vivo by isolating specific transcripts from human uterus total RNA and from several mouse tissues. Strong evidence for alternative splicing was found, which may result in tissue-specific transcription start points and, to some extent, different protein N-termini. RT-PCR and hybridisation experiments suggest that STK33/Stk33 is differentially expressed in a few tissues and at relatively low levels. STK33 has been shown to be reproducibly down-regulated in tumor tissues, particularly in ovarian tumors. RNA in-situ hybridisation experiments using mouse Stk33-specific probes showed expression in dividing cells from lung and germinal epithelium and possibly also in macrophages from kidney and lungs. Preliminary experiments with antibodies designed in this work, performed in parallel with the preparation of this manuscript, seem to confirm this expression pattern. The fact that the chromosomal region 11p15, in which STK33 is located, may be associated with several human diseases, including tumor development, suggests that further investigation is necessary to establish the role of STK33 in human health.

Relevance: 10.00%

Abstract:

The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation where the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), where the reservoirs are prepared close to thermal equilibrium states, each at a different temperature. We analyze the interacting time evolution of such an initial configuration and we show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. As long as the reservoirs have been prepared at different temperatures, this stationary state features thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguishes it from a thermal equilibrium at any temperature. Therefore, we refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects used to describe matter in interaction with radiation. The C*-theory is adapted to the concrete setup. The algebraic description of the system is lifted into a Hilbert space framework. The appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space. The first part of the present work is concluded by the derivation of a spectral theory which connects the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. In this way, the question of thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type. The spectral analysis of the generator K follows. This task is the core part of the work and employs various kinds of functional analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the strength of the coupling is sufficiently small. The extraction of dynamical features of the system from properties of K requires, in particular, knowledge about the spectrum of K in the nearest vicinity of the eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions only suffice to study the perturbed spectrum in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need to apply a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrary scales requires an iteration of the Feshbach map. This procedure leads to an operator-theoretic renormalization group.
The reader is introduced to the Feshbach technique, and the renormalization procedure based on it is discussed in full detail. Further, it is explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends a recent research contribution by Jakšić and Pillet on a similar physical setup in two ways. Firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach-map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
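For orientation, a standard formulation of the technique (sketched here under the usual invertibility assumption rather than quoted from the thesis): given a projection P with complement \bar{P} = 1 - P, the Feshbach map acts on K - z as

    \[ F_P(K - z) = P\,(K - z)\,P - P\,K\,\bar{P}\,\bigl(\bar{P}\,(K - z)\,\bar{P}\bigr)^{-1}\,\bar{P}\,K\,P, \]

defined on the range of P whenever \bar{P}(K - z)\bar{P} is invertible on the range of \bar{P}. Its key isospectrality property is that z is an eigenvalue of K if and only if 0 is an eigenvalue of F_P(K - z), which is what allows the spectral analysis to be transferred to the smaller subspace and then iterated, scale by scale.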

Relevance: 10.00%

Abstract:

The work of the present thesis is focused on the implementation of microelectronic voltage-sensing devices, with the purpose of transmitting and extracting analog information between devices of different nature at short distances or upon contact. Initially, chip-to-chip communication has been studied, and circuitry for 3D capacitive coupling has been implemented. Such circuits allow communication between dies fabricated in different technologies. Due to their novelty, they are not standardized and are currently not supported by standard CAD tools. In order to overcome this burden, a novel approach for the characterization of such communication links has been proposed, resulting in shorter design times and increased accuracy. Communication between an integrated circuit (IC) and a probe card has been extensively studied as well. Today wafer probing is a costly test procedure with many drawbacks, which could be overcome by a different communication approach such as capacitive coupling. For this reason, wireless wafer probing has been investigated as an alternative to standard on-contact wafer probing. Interfaces between integrated circuits and biological systems have also been investigated. Active electrodes for simultaneous electroencephalography (EEG) and electrical impedance tomography (EIT) have been implemented for the first time in a 0.35 um process. The number of wires has been minimized by sharing the analog outputs and the supply on a single wire, thus implementing electrodes that require only 4 wires for their operation. Minimizing the number of wires reduces the cable weight and thus limits the patient's discomfort. The physical channel for communication between an IC and a biological medium is represented by the electrode itself. As this is a crucial point for biopotential acquisition, a large effort has been carried out to investigate different electrode technologies and geometries, and an electromagnetic model is presented in order to characterize the properties of the electrode-to-skin interface.
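As a point of reference for the electrode-to-skin interface (a widely used simplified lumped model, not necessarily the electromagnetic model developed in the thesis), the interface impedance is often approximated, in series with a half-cell potential, as

    \[ Z(\omega) = R_s + \frac{R_{ct}}{1 + j\,\omega\,R_{ct}\,C_{dl}}, \]

where R_s is the series (skin/gel) resistance, R_ct the charge-transfer resistance and C_dl the double-layer capacitance of the electrode-electrolyte interface. Fitting these parameters for different electrode technologies and geometries gives a first-order comparison of their behaviour over the EEG and EIT frequency ranges.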

Relevance: 10.00%

Abstract:

The present work deals with the hydrophobization of inorganic nanoparticles for the preparation of nanocomposites. Owing to their large, reactive surface, nanoparticles tend to aggregate, especially in hydrophobic media. Methods known from the literature for the subsequent modification of already existing particles only partially lead to particles that redisperse well in hydrophobic media. Since the shell is created only after particle formation, the formation of primary aggregates cannot be avoided. The novelty of the method applied in this work is the formation of the particle shell before the formation of the particles. The precipitation of the nanoparticles inside aqueous emulsion droplets rules out premature aggregation of the particles. A large number of different inorganic nanoparticles were prepared, whose size could be controlled by varying the synthesis parameters. Furthermore, it was possible to realize a broad variation of the type of particle shell, which turned out to be decisive for the compatibility with a polymer matrix. The compatibility with the matrix allowed a flawless dispersion of different inorganic nanoparticles in the composite material. Depending on the choice of inorganic material, various composite properties, such as optical, electrical, magnetic or mechanical ones, can be influenced. In this work the focus was placed on increased UV absorption, and an improved impact strength of the nanocomposites was also observed. Owing to the excellent dispersion of the nanoparticles in the matrix, these nanocomposites were highly transparent.

Relevance: 10.00%

Abstract:

Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors of the gastrointestinal tract. This work considers the pharmacological response of GIST patients treated with imatinib from two different angles: the genetic and the somatic point of view. We analyzed the influence of polymorphisms on treatment outcome, considering SNPs in genes involved in drug transport and in the folate pathway. Naturally, these intriguing results cannot be considered the only relevant mechanism of imatinib response. GIST depends mainly on oncogenic gain-of-function mutations in the tyrosine kinase receptor genes KIT or PDGFRA, and the mutational status of these two genes, or the acquisition of secondary mutations, is considered the main player in GIST development and progression. To this end, we analyzed the secondary mutations to better understand how they are involved in imatinib resistance. In our analysis we considered both imatinib and the second-line treatment, sunitinib, in a subset of progressive patients. KIT/PDGFRA mutation analysis is an important tool for physicians, as specific mutations may guide therapeutic choices. Currently, the only adaptation in treatment strategy is an imatinib starting dose of 800 mg/day in KIT exon-9-mutated GISTs. In the attempt to individualize treatment, genetic polymorphisms represent a novelty in the definition of biomarkers of imatinib response, in addition to the use of tumor genotype. Accumulating data indicate a contributing role of pharmacokinetics in imatinib efficacy, as well as in initial response, time to progression and acquired resistance. At the same time, it is becoming evident that genetic host factors may contribute to the observed inter-patient pharmacokinetic variability. Genetic polymorphisms in transporter and metabolism genes may affect the activity or stability of the encoded proteins. Thus, integrating pharmacogenetic data on imatinib transporters and metabolizing genes, whose interplay has yet to be fully unraveled, has the potential to provide further insight into imatinib response/resistance mechanisms.

Relevance: 10.00%

Abstract:

Body-centric communications are emerging as a new paradigm in the panorama of personal communications. Being concerned with human behaviour, they are suitable for a wide variety of applications. The advances in the miniaturization of portable devices to be placed on or around the body foster the diffusion of these systems, where the human body is the key element defining the communication characteristics. This thesis investigates the human impact on body-centric communications under its distinctive aspects. First of all, the unique propagation environment defined by the body is described through a scenario-based channel modeling approach, according to the communication scenario considered, i.e., on-body or on- to off-body. The novelty introduced pertains to the description of radio channel features accounting for multiple sources of variability at the same time. Secondly, the importance of a proper channel characterisation is shown by integrating the on-body channel model into a system-level simulator, allowing a more realistic comparison of different Physical and Medium Access Control layer solutions. Finally, the structure of a comprehensive simulation framework for system performance evaluation is proposed. It aims at merging into one tool the mobility and social features typical of the human being, together with the propagation aspects, in a scenario where multiple users interact, sharing space and resources.
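The channel features referred to above are not specified in the abstract; as general context only (a generic form often used for body-centric links, not the model developed in the thesis), path loss with several superimposed sources of variability is frequently written as

    \[ PL(d)\,[\mathrm{dB}] = PL_0 + 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right) + X_{\mathrm{body}} + X_{\sigma}, \]

where PL_0 is the reference path loss at distance d_0, n the path-loss exponent, X_body a term accounting for posture- and movement-dependent body shadowing, and X_sigma a log-normal term for residual variability; a scenario-based model then assigns different parameter sets to the on-body and on- to off-body cases.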

Relevance: 10.00%

Abstract:

The thesis describes the implementation of calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance and software implementation. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by specific subroutines. Specific attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite, in terms of both the sheer number of lines of code and the development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables proved to be an essential operation to be performed on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous and all-round propagation-media calibration software package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
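As background on what a GNSS-based tropospheric calibration typically estimates (the standard decomposition used in GNSS meteorology, given here as general context rather than as the thesis's specific formulation), the slant tropospheric delay at elevation angle e is usually modelled as

    \[ \Delta L(e) = m_h(e)\,\mathrm{ZHD} + m_w(e)\,\mathrm{ZWD}, \]

where ZHD and ZWD are the zenith hydrostatic and wet delays and m_h, m_w the corresponding mapping functions. GNSS processing yields the zenith delays (the well-modelled hydrostatic part can be computed from surface pressure), and the calibration then maps the estimated delay to the elevation of the deep-space radiometric link.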

Relevance: 10.00%

Abstract:

In the last decades, medical malpractice has been framed as one of the most critical issues for healthcare providers and health policy, holding a central role in both the policy agenda and the public debate. The Law and Economics literature has devoted much attention to medical malpractice and to the investigation of the impact of malpractice reforms. Nonetheless, some reforms, such as schedules, have been much less studied empirically, and their effects remain highly debated. The present work seeks to contribute to the study of medical malpractice and of schedules of noneconomic damages in a civil law country with a public national health system, using Italy as a case study. Besides considering schedules and exploiting a quasi-experimental setting, the novelty of our contribution consists in the inclusion of the performance of the judiciary (measured as courts' civil backlog) in the empirical analysis. The empirical analysis is twofold. First, it investigates how limiting compensation for pain and suffering through schedules affects the malpractice insurance market in terms of the presence of private insurers and of the premiums applied. Second, it examines whether, and to what extent, healthcare providers react to the implementation of this policy in terms of both the level and the composition of the medical treatments offered. Our findings show that the introduction of schedules increases the presence of insurers only in inefficient courts, while it does not produce significant effects on paid premiums. Judicial inefficiency is attractive to insurers for average levels of market penetration of schedules, with an increasing positive impact of inefficiency as the territorial coverage of schedules increases. Moreover, the implementation of schedules tends to reduce the use of defensive practices on the part of clinicians, but the magnitude of this impact is ultimately determined by the actual degree of backlog of the court implementing schedules.

Relevance: 10.00%

Abstract:

This thesis aims to fill the gap in the literature by examining the relationship between technological trajectories and environmental policy in the automotive industry, focusing on the role of environmental policies in unlocking the industry from fossil fuel path-dependence. It first explores the inducement mechanism that underpins the interaction between environmental policy and green technological advances, investigating under what conditions the European environmental transport policy portfolio and the intrinsic characteristics of assignees' knowledge boost worldwide green patent production. Subsequently, the thesis empirically analyses the dynamics of technological knowledge involved in technological trajectories assessing evolution patterns such as variation, selection and retention, in order to study the impact of policy implementation on technological knowledge related to electric and hybrid vehicle technologies. Finally, the thesis sheds light on the drivers that encourage a shift from incumbent internal combustion engine technologies towards low-emission vehicle technologies. This analysis tests whether tax-inclusive fuel prices and technological proximity between technological fields induce a shift from non-environmental inventions to environmentally friendly inventive activities and if they impact the competition between alternative vehicle technologies. The findings provide insights into the effectiveness of environmental policy in triggering inventive activities related to the development of alternative vehicle technologies. In addition, there is evidence that environmental policy redirects technological efforts towards a sustainable path and impacts the competition between low-emission vehicles.

Relevance: 10.00%

Abstract:

This work aims to evaluate the reliability of levee systems by calculating the probability of "failure" of selected levee stretches under different loads, using probabilistic methods that take into account fragility curves obtained through the Monte Carlo method. For this study, overtopping and piping are considered as failure mechanisms (since these are the most frequent), and the major levee system of the Po River is analysed, with a primary focus on the section between Piacenza and Cremona, in the lower-middle area of the Padana Plain. The novelty of this approach is to check the reliability of individual embankment stretches, not just a single cross-section, while taking into account the variability of the levee system geometry from one stretch to another. For each levee stretch analysed, this work also takes into consideration a probability distribution of the load variables involved in the definition of the fragility curves, which is influenced by the differences in the topography and morphology of the riverbed along the analysed reach as they pertain to the levee system in its entirety. A classification is proposed, for both failure mechanisms, to give an indication of the reliability of the levee system based on the information obtained from the fragility curve analysis. To accomplish this work, a hydraulic model has been developed in which a 500-year flood is modelled to determine the residual hazard value of failure for each levee stretch at the corresponding water depth, and the results are then compared with the obtained classifications. This work has the additional aim of acting as an interface between the worlds of Applied Geology and Environmental Hydraulic Engineering, where a strong collaboration between the two professions is needed to resolve and improve the estimation of hydraulic risk.
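As a rough illustration of how a fragility curve can be built by Monte Carlo simulation (a generic overtopping-only sketch with made-up distributions and parameters, not the actual levee model of this work), one can estimate, for each water level, the fraction of sampled crest elevations that are exceeded:

    # Generic Monte Carlo fragility-curve sketch for an overtopping limit state.
    # Distributions, parameters and the limit-state function are illustrative
    # assumptions, not the levee model used in the thesis.

    import random

    def overtopping_failure(water_level: float, crest_mean: float, crest_std: float) -> bool:
        """Limit state g = crest - water level; failure when g < 0."""
        crest = random.gauss(crest_mean, crest_std)  # uncertain crest elevation
        return crest - water_level < 0

    def fragility_point(water_level: float, crest_mean: float, crest_std: float,
                        n_samples: int = 100_000) -> float:
        """Conditional failure probability P(failure | water level)."""
        failures = sum(overtopping_failure(water_level, crest_mean, crest_std)
                       for _ in range(n_samples))
        return failures / n_samples

    # Fragility curve for one levee stretch: crest ~ N(8.0 m, 0.3 m) above datum
    curve = {h: fragility_point(h, crest_mean=8.0, crest_std=0.3)
             for h in [7.0, 7.5, 8.0, 8.5, 9.0]}
    print(curve)  # failure probability rises from ~0 to ~1 across the crest level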