18 results for semigroups of bounded linear operators

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

This thesis investigates Decomposition and Reformulation for solving Integer Linear Programming problems. The method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median, and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP: the approach automatically decomposes and reformulates the input problem, can be applied as a black-box solution algorithm, and works as a complement and alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the resulting partially convexified polyhedron(s). For a given MIP, several decompositions can be defined, depending on which sets of constraints are convexified. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be expressed as a linear combination of extreme points of the slaves). The solution procedure iteratively solves the reformulated MIP (the master) and checks (pricing) whether a variable of negative reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops. The advantage of DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be embedded in a Branch-and-Bound scheme (Branch-and-Price) to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a significant speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
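
As a generic illustration of the master/pricing interplay just described (a schematic single-slave formulation, not the thesis's own notation):

```latex
% Schematic DWD master over the extreme points x_p of one slave polyhedron:
\min_{\lambda \ge 0} \; \sum_{p} \big(c^{\top} x_p\big)\,\lambda_p
\quad \text{s.t.} \quad
A \sum_{p} x_p\,\lambda_p \;\ge\; b,
\qquad
\sum_{p} \lambda_p \;=\; 1 .

% Pricing with master duals (\pi, \pi_0): seek a slave extreme point of
% negative reduced cost, and add it as a new column while one exists:
\bar{c} \;=\; \min_{x \,\in\, \text{slave polyhedron}}
\big(c - A^{\top}\pi\big)^{\top} x \;-\; \pi_0 \;<\; 0 .
```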

Relevance: 100.00%

Abstract:

Nowadays robotic applications are widespread and most manipulation tasks are solved efficiently. However, Deformable Objects (DOs) still represent a major limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevent the use of model-based approaches (they become excessively computationally complex) and make sensory data difficult to interpret. This thesis reports research activities aimed at addressing applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with particular focus on electric wires. Throughout the work, a significant effort was devoted to effective strategies for analyzing sensory signals with various machine learning algorithms. The first part of the document focuses on wire terminals, i.e. detection, grasping, and insertion. First, a pipeline integrating vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention, and together with this strategy we extend a generic Convolutional Neural Network object detector to predict orientation. The insertion task is also extended by developing a closed-loop controller capable of guiding the insertion of a longer, curved segment of wire through a hole, with the contact forces estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations through Deep Q-learning and finds the best releasing point. The success of the solution relies on a reliable interpretation of the DLO shape, and for this reason further developments are made on the visual segmentation.
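
Since the abstract gives no implementation detail, the following is a purely hypothetical sketch of the Deep Q-learning ingredient: epsilon-greedy selection among candidate grasp points and a temporal-difference update, with a linear model standing in for the trained network (all names and numbers are invented for illustration).

```python
# Illustrative sketch only: every name here (features, reward, model) is a
# hypothetical stand-in. It shows the generic Q-learning scheme used to
# score candidate grasp locations from visual features.
import numpy as np

rng = np.random.default_rng(0)

def q_values(state, weights):
    """Stand-in linear 'network': one Q value per candidate grasp point."""
    return state @ weights          # state: (n_candidates, n_features)

def select_grasp(state, weights, eps=0.1):
    """Epsilon-greedy choice among candidate grasp points."""
    if rng.random() < eps:
        return int(rng.integers(len(state)))
    return int(np.argmax(q_values(state, weights)))

def td_update(state, action, reward, next_state, weights, lr=0.01, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = reward + gamma * np.max(q_values(next_state, weights))
    error = target - q_values(state, weights)[action]
    weights += lr * error * state[action]  # gradient step for the linear model
    return weights

# Toy usage: 5 candidate grasp points described by 3 visual features each.
weights = np.zeros(3)
state = rng.normal(size=(5, 3))
a = select_grasp(state, weights)
weights = td_update(state, a, reward=1.0,
                    next_state=rng.normal(size=(5, 3)), weights=weights)
```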

Relevance: 100.00%

Abstract:

Imaging technologies are widely used in fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for these ill-posed IPs rely on variational regularization methods, which minimize suitable energies and exploit knowledge of the image formation model (the forward operator) and prior knowledge of the solution, but fail to incorporate knowledge directly from data. Conversely, more recent learned approaches can readily capture the intricate statistics of images from large data sets, but lack a systematic way to incorporate prior knowledge of the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for image IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the Computed Tomography problem recovered using the filtered back-projection method.
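
A minimal numpy sketch of the plug-and-play proximal Gauss-Newton idea, with toy stand-ins for the forward operator, its Jacobian, and the learned denoiser (none of this is the thesis's code; a trained graph convolutional network would replace the shrinkage step):

```python
# Sketch of a PnP proximal Gauss-Newton iteration on a toy diagonal problem.
import numpy as np

def F(x):            # toy nonlinear forward operator
    return x + 0.1 * x**3

def J(x):            # its Jacobian (diagonal here, stored as a vector)
    return 1.0 + 0.3 * x**2

def denoise(x):      # stand-in for the learned denoiser (e.g. graph CNN)
    return 0.9 * x   # simple shrinkage in place of a trained network

def pnp_gauss_newton(y, x0, lam=1e-2, iters=20):
    x = x0.copy()
    for _ in range(iters):
        j = J(x)
        # Damped Gauss-Newton step on the data term ||F(x) - y||^2
        step = j * (F(x) - y) / (j * j + lam)
        x = denoise(x - step)    # proximal map replaced by the denoiser
    return x

x_true = np.linspace(-1, 1, 64)
y = F(x_true) + 0.01 * np.random.default_rng(1).normal(size=64)
x_rec = pnp_gauss_newton(y, x0=np.zeros(64))
```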

Relevance: 100.00%

Abstract:

The coastal ocean is a complex environment with extremely dynamic processes that require a high-resolution, cross-scale modeling approach in which all hydrodynamic fields and scales are considered integral parts of the overall system. In the last decade, unstructured-grid models have been used to advance seamless modeling across scales. Data assimilation methodologies to improve unstructured-grid models in coastal seas, on the other hand, have been developed only recently and need significant advancement. Here we link unstructured-grid ocean modeling to variational data assimilation methods. In particular, we show results from the SANIFS modeling system, based on the fully baroclinic unstructured-grid model SHYFEM, interfaced with OceanVar, a state-of-the-art variational data assimilation scheme adopted for several structured-grid systems. OceanVar implements a 3DVar DA scheme in which the background error covariance matrix is modeled as the combination of three linear operators. The vertical part is represented by multivariate EOFs for temperature, salinity, and sea level anomaly. The horizontal part is assumed Gaussian and isotropic and is modeled with a first-order recursive filter algorithm designed for structured, regular grids; here we introduce a novel recursive filter algorithm for unstructured grids. A local hydrostatic adjustment scheme models the rapidly evolving part of the background error covariance. We designed two data assimilation experiments with the SANIFS implementation interfaced with OceanVar over the period 2017-2018: one assimilating only temperature and salinity from Argo profiles, and a second also including sea level anomaly. The results show a successful implementation of the approach and the added value of the assimilation for the active tracer fields, while over the broad basin no significant improvement is found for sea level, which requires future investigation. Furthermore, a Machine Learning methodology based on an LSTM network has been used to predict the model SST increments.
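
For illustration, a sketch of the classical first-order recursive filter on a regular 1-D grid, the structured-grid building block that the thesis generalizes to unstructured meshes (the unstructured variant itself is not reproduced here):

```python
# First-order recursive filter: forward/backward sweeps whose repeated
# application yields a quasi-Gaussian correlation kernel, as used to model
# horizontal background-error correlations in variational DA.
import numpy as np

def recursive_filter_1d(field, alpha, passes=2):
    """alpha in (0, 1) sets the correlation length; more passes make the
    impulse response increasingly Gaussian-shaped."""
    x = field.astype(float).copy()
    for _ in range(passes):
        for i in range(1, len(x)):              # forward sweep
            x[i] = alpha * x[i - 1] + (1 - alpha) * x[i]
        for i in range(len(x) - 2, -1, -1):     # backward sweep
            x[i] = alpha * x[i + 1] + (1 - alpha) * x[i]
    return x

impulse = np.zeros(101)
impulse[50] = 1.0
response = recursive_filter_1d(impulse, alpha=0.7)  # quasi-Gaussian kernel
```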

Relevance: 100.00%

Abstract:

Since the development of quantum mechanics it has been natural to analyze the connection between the classical and quantum mechanical descriptions of physical systems. In particular, one expects that, in some sense, when quantum mechanical effects become negligible the system behaves as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp, who proved that matrix elements of bounded functions of quantum observables between suitable coherent states (depending on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, evolving according to the Schrödinger equation, interacting through a Yukawa-type potential with a scalar relativistic field, evolving according to the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator between suitable coherent states converges, in the classical limit, to the solution of the system of differential equations describing the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to the corresponding products of the classical solutions, while between fixed-particle states they converge to an average of products of classical solutions over different initial conditions.
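
Schematically, and with constants, cutoffs, and the precise functional setting suppressed, the limiting classical dynamics is a Schrödinger-Klein-Gordon system coupled through the Yukawa-type form factor V; this compact form is our paraphrase, not the thesis's notation:

```latex
i\,\partial_t u \;=\; -\tfrac{1}{2}\,\Delta u \;+\; \big(V * A\big)\,u,
\qquad
\big(\partial_t^2 - \Delta + m^2\big) A \;=\; -\,V * |u|^2 ,
```

where u is the particle wave function, A the real scalar field, and * denotes spatial convolution.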

Relevance: 100.00%

Abstract:

The presented study carries out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented with GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning, yet a literature review reveals a general lack of studies modeling the rural built environment, so a theoretical modeling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment; in addition, urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of this study is to propose a methodology for developing a spatial model that identifies the driving forces acting on building allocation; indeed, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology consists of several conceptual steps covering the different aspects of spatial model development: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and methods used, and the calibration and evaluation of the model. Different combinations of factors in various parts of the territory generate conditions that are more or less favourable for building allocation, and the existence of buildings is the evidence of such an optimum; conversely, the absence of buildings expresses a combination of agents unsuitable for building allocation. Presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it expresses the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS spatial analysis tools allow the concepts of presence and absence to be associated with point features, generating a point process: presence points represent locations of real existing buildings, while absence points represent locations where buildings do not exist and are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation.

The model developed following this methodology is applied to a case study to test the validity of the approach. The study area is the New District of Imola, characterized by a prevailing agricultural production vocation and by intense transformation dynamics. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural, and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data for the periurban and rural parts of the study area over the 1975-2005 period by means of a generalised linear model: the response variable captures the changes in the rural built environment over this interval and is related to the selected explanatory variables through logistic regression. The resulting output of the model fit is a continuous grid surface whose cells assume values between 0 and 1, the probability of building occurrence across the rural and periurban area. Comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas and over different time intervals, depending on the availability of data; the use of suitable data in terms of time, information, and spatial resolution, together with the costs of data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. Predicting future scenarios requires assuming that the driving forces do not change and that their levels of influence in the model remain close to those assessed for the calibration interval.
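
As a hypothetical illustration of the calibration step (a presence/absence logistic GLM; all variable names and numbers are invented stand-ins, not the thesis's data):

```python
# Logistic-regression GLM relating building presence/absence points to
# candidate driving-force variables, fitted with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
# Candidate explanatory variables sampled at presence/absence locations,
# e.g. slope, distance to roads, distance to urban centres (all synthetic).
X = np.column_stack([
    rng.uniform(0, 30, n),       # slope [degrees]
    rng.uniform(0, 5000, n),     # distance to nearest road [m]
    rng.uniform(0, 10000, n),    # distance to urban centre [m]
])
# Synthetic presence/absence response, only to make the example runnable.
logit = 0.5 - 0.03 * X[:, 0] - 0.0006 * X[:, 1] - 0.0002 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(model.summary())                      # coefficients = force influence
prob = model.predict(sm.add_constant(X))    # 0-1 probability surface values
```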

Relevance: 100.00%

Abstract:

This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades thanks to technological advances that have allowed companies to gather ever more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination, and the theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency, and then surveys the different approaches to modeling non-linear pricing, highlighting in particular the difference between price and quantity competition. Finally, the close link between non-linear pricing and recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes, information about consumers' reservation prices is incomplete, and the production technology is characterized by size economies. The model provides insights into the sizes of the products found in the market: four equilibrium regions are identified, depending on the intensity of size economies relative to consumers' valuation of the good, in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues are at the core of the recent debate on whether or not the internet should be regulated, and one of the main questions posed is whether the telecom companies owning the networks that constitute the internet should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices, and the participation of providers and final users in the internet. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.

Relevance: 100.00%

Abstract:

Phase variable expression, mediated by high-frequency reversible changes in the length of simple sequence repeats, facilitates the adaptation of bacterial populations to changing environments and is frequently important in bacterial virulence. Here we elucidate a novel phase variable mechanism for the expression of NadA, an adhesin and invasin of Neisseria meningitidis. The NadR repressor protein binds to operators flanking the phase variable tract of the nadA promoter and contributes to the differential expression levels of phase variant promoters with different numbers of repeats, likely due to the different spacing between operators. We show that IHF binds between these operators and may permit looping of the promoter, allowing interaction of NadR at operators located distal to or overlapping the promoter. 4-Hydroxyphenylacetic acid, a metabolite of aromatic amino acid catabolism secreted in saliva, induces nadA expression by inhibiting the DNA-binding activity of the NadR repressor. When induced, only minor differences are evident between the NadR-independent transcription levels of the promoter phase variants, likely due to differential RNA polymerase contacts leading to altered promoter activity. These results suggest that NadA expression is under both stochastic and tight environmental-sensing regulatory control, both mediated by the NadR repressor, and may be induced during colonization of the oropharynx, where NadA plays a major role in the successful adhesion to and invasion of the mucosa. Hence, simple sequence repeats in promoter regions may be a strategy used by host-adapted bacterial pathogens to randomly switch between expression states that may nonetheless still be induced by appropriate niche-specific signals.

Relevance: 100.00%

Abstract:

The ferric uptake regulator protein Fur regulates iron-dependent gene expression in bacteria. In the human pathogen Helicobacter pylori, Fur has been shown to regulate both iron-induced and iron-repressed genes. Here we investigate the molecular mechanisms that control this differential iron-responsive Fur regulation. Hydroxyl radical footprinting showed that Fur has different binding architectures, which characterize distinct operator typologies. On operators recognized with higher affinity by holo-Fur, the protein binds to a continuous AT-rich stretch of about 20 bp, displaying an extended protection pattern indicative of protein wrapping around the DNA helix. DNA binding interference assays with the minor-groove-binding drug distamycin A point out that recognition of the holo-operators occurs through the minor groove of the DNA. By contrast, on the apo-operators Fur binds primarily to thymine dimers within a newly identified TCATTn10TT consensus element, indicative of Fur binding to one side of the DNA, in the major groove of the double helix. Reconstitution of the TCATTn10TT motif within a holo-operator results in a binding-feature swap from a holo-Fur- to an apo-Fur-recognized operator, affecting both the affinity and the binding architecture of Fur and conferring apo-Fur repression features in vivo. Size exclusion chromatography indicated that Fur is a dimer in solution; however, in the presence of divalent metal ions the protein is able to multimerize. Accordingly, apo-Fur binds DNA as a dimer in gel shift assays, while in the presence of iron higher-order complexes are formed. Stoichiometric Ferguson analysis indicates that these complexes correspond to one or two Fur tetramers, each bound to an operator element. Together these data suggest that the apo- and holo-Fur repression mechanisms rely on two distinct modes of operator recognition, involving respectively the readout of a specific nucleotide consensus motif in the major groove for apo-operators and the recognition of AT-rich stretches in the minor groove for holo-operators, whereas the iron-responsive binding affinity is controlled through metal-dependent shaping of the protein structure to preferentially match the major or the minor groove.

Relevance: 100.00%

Abstract:

The aim of this doctoral thesis is to develop a genetic algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts from the aircraft specifications, and the turbofan or turboprop best suited to the specific application is then chosen. In the aeronautical piston engine field, which lay dormant for several decades as interest shifted towards turbine aircraft, new materials with improved performance and properties have opened new possibilities for development; moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables can be very high, with several non-linearities needed to describe the behaviour of the phenomena; the objective function then has many local extrema, whereas the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass satisfying the geometrical, structural, and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber; the design variable bounds, which describe the solution domain from the geometrical point of view, are also introduced. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods, and cranks as there are cylinders. The fitness is evaluated for each individual of the population, then the genetic operators are applied: reproduction, mutation, selection, and crossover. In the reproduction step the elitist method is applied, saving the fittest individuals from possible disruption by mutation and recombination so that they survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system and obtain a direct pre-visualization of the final product while still in the preliminary design phase. To demonstrate the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a 4-cylinder, four-stroke Diesel. Many verifications are made on the mechanical components of the engine in order to test their feasibility and decide their survival through the generations: a system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations.

The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve were chosen as representative of the distribution of the individuals, and for each of them, as an example, the corresponding 3D models of the crankshaft and connecting rod were built automatically. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) with respect to the original configuration, and an acceptable robustness of the method. The algorithm developed here is thus shown to be a valid method for the preliminary design optimization of an aeronautical piston engine: in particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfill the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
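
The thesis works in the Matlab® environment; purely as an illustrative stand-in, here is a minimal single-objective GA with elitism and penalty-based constraint handling, with a toy two-variable "mass" objective (every function and number below is hypothetical, not the thesis's model):

```python
# Toy single-objective GA: minimise mass subject to a penalised constraint.
import numpy as np

rng = np.random.default_rng(7)
LOW, HIGH = np.array([60.0, 50.0]), np.array([110.0, 100.0])  # variable bounds

def mass(x):                       # stand-in objective: mass of the assembly
    return 0.002 * x[0] ** 2 + 0.001 * x[1] ** 2

def penalty(x):                    # stand-in structural constraint g(x) <= 0
    g = 90.0 - (x[0] + x[1])       # e.g. a minimum-size requirement
    return 1e3 * max(g, 0.0)

def fitness(x):                    # infeasible points are penalised
    return mass(x) + penalty(x)

pop = rng.uniform(LOW, HIGH, size=(40, 2))
for gen in range(100):
    scores = np.array([fitness(x) for x in pop])
    order = np.argsort(scores)
    elite = pop[order[:2]].copy()             # elitism: best two pass unchanged
    parents = pop[order[: len(pop) // 2]]     # truncation selection
    children = []
    while len(children) < len(pop) - 2:
        a, b = parents[rng.integers(len(parents), size=2)]
        w = rng.random()
        child = w * a + (1 - w) * b           # blend crossover
        child += rng.normal(0.0, 1.0, size=2) # Gaussian mutation
        children.append(np.clip(child, LOW, HIGH))
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(x) for x in pop])]
```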

Relevance: 100.00%

Abstract:

Analytical pyrolysis was used to investigate the formation of diketopiperazines (DKPs), cyclic dipeptides formed in the thermal degradation of proteins. A qualitative/quantitative procedure was developed combining microscale flash pyrolysis at 500 °C with gas chromatography-mass spectrometry (GC-MS) of the DKPs trapped onto an adsorbent phase; polar DKPs were silylated prior to GC-MS. Particular attention was paid to the identification of proline (Pro)-containing DKPs, owing to their greater ease of formation. The GC-MS characteristics of more than 80 original and silylated DKPs were collected from the pyrolysis of sixteen linear dipeptides and four model proteins (e.g. bovine serum albumin, BSA). The structure of a novel DKP, cyclo(pyroglutamic-Pro), was established by NMR and ESI-MS analysis, while the structures of other novel DKPs remain tentative. DKPs proved to be rather specific markers of amino acid sequences in proteins, even though the thermal degradation of the DKPs themselves should be taken into account. The structural information on DKPs gathered from the pyrolysis of model compounds was employed to identify these compounds in the pyrolysates of proteinaceous samples, including intrinsically unfolded proteins (IUPs). Analysis of the liquid fraction (bio-oil) obtained from the pyrolysis of the microalgae Nannochloropsis gaditana and Scenedesmus spp. in a bench-scale reactor showed that DKPs constitute an important pool of nitrogen-containing compounds; conversely, the level of DKPs was rather low in the bio-oil of Botryococcus braunii. The developed micropyrolysis procedure was applied in combination with thermogravimetry (TGA) and infrared spectroscopy (FT-IR) to investigate the surface interaction between BSA and synthetic chrysotile: the results showed that the thermal behavior of BSA (e.g. DKP formation) was affected by the different forms of doped synthetic chrysotile. Finally, the typical DKPs evolved from collagen were quantified in the pyrolysates of archaeological bones from the Vicenne Necropolis, in combination with TGA, FT-IR, and XRD analysis, in order to evaluate their conservation status.

Relevance: 100.00%

Abstract:

In this work we investigate the influence of dark energy on structure formation within five different cosmological models: a concordance $\Lambda$CDM model, two models with dynamical dark energy viewed as a quintessence scalar field (with RP and SUGRA potential forms), and two extended quintessence models (EQp and EQn) in which the quintessence scalar field interacts non-minimally with gravity (scalar-tensor theories). For all models we adopt the normalization of the matter power spectrum $\sigma_{8}$ that matches the CMB data. For each model we perform hydrodynamical simulations in a cosmological box of $(300 \ {\rm{Mpc}} \ h^{-1})^{3}$ including baryons and allowing for cooling and star formation. We find that, in models with dynamical dark energy, the evolving cosmological background leads to different star formation rates and different formation histories of galaxy clusters, but the baryon physics is not affected in a relevant way. We investigate several proxies for the cluster mass function based on X-ray observables such as temperature, luminosity, $M_{gas}$, and $Y_{X}$. We confirm that the overall baryon fraction is almost independent of the dark energy model to within a few percentage points, and the same is true for the gas fraction; this evidence reinforces the use of galaxy clusters as cosmological probes of the matter and energy content of the Universe. We also study the $c-M$ relation in the different cosmological scenarios, using both dark-matter-only and hydrodynamical simulations. We find that the normalization of the $c-M$ relation is directly linked to $\sigma_{8}$ and to the evolution of the density perturbations for $\Lambda$CDM, RP, and SUGRA, while for EQp and EQn it also depends on the evolution of the linear density contrast. These differences in the $c-M$ relation provide another way to use galaxy clusters to constrain the underlying cosmology.
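
For reference, the standard definitions behind the $c-M$ relation (textbook material, not results of this thesis): concentration is defined through the NFW profile, and the relation is commonly fit by a power law,

```latex
\rho(r) \;=\; \frac{\rho_s}{(r/r_s)\,\left(1 + r/r_s\right)^{2}},
\qquad
c \;\equiv\; \frac{r_{200}}{r_s},
\qquad
c(M,z) \;\simeq\; c_0 \left(\frac{M}{M_{\rm pivot}}\right)^{\alpha} (1+z)^{\beta},
```

where $c_0$ carries the normalization that the simulations link to $\sigma_{8}$.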

Relevance: 100.00%

Abstract:

This thesis focused on the investigation of the linear optical properties of novel two-photon absorbers for biomedical applications. Substituted imidazole and imidazopyridine derivatives and organic dendrimers were studied as potential fluorophores for two-photon bioimaging. The results showed superior luminescence properties for sulphonamido imidazole derivatives compared to other substituted imidazoles. The luminescence properties of imidazo[1,2-a]pyridines showed a strong dependence on the substitution pattern: substitution at the imidazole ring led to a higher fluorescence yield than substitution at the pyridine ring. Bis-imidazo[1,2-a]pyridines of donor-acceptor-donor type were also examined; those dimerized at the C3 position had better luminescence properties than those dimerized at C5, displaying high emission yields and substantial 2PA cross-sections. Phosphazene-based dendrimers with fluorene branches and cationic charges on the periphery were examined as well: owing to aggregation phenomena in polar solvents, the dendrimers suffered a significant loss of luminescence with respect to the fluorene chromophore model. An improved design based on more rigid chromophores yields enhanced luminescence properties which, combined with large 2PA cross-sections, make these compounds valuable as fluorophores for bioimaging. A photophysical study of several ketocoumarin initiators, designed for the fabrication of small-dimension prostheses by two-photon polymerization (2PP), was also carried out. The compounds showed low emission yields, indicative of a high population of the triplet excited state, which is the active state in producing the reactive species; their efficiency in 2PP was proved by the fabrication of microstructures, and their biocompatibility was tested in the collaborator's laboratory. Within the framework of 2PA photorelease of drugs, three fluorene-based dyads designed to release γ-aminobutyric acid via two-photon-induced electron transfer were investigated. The experimental data in polar solvents showed a fast electron transfer followed by an almost equally fast back electron transfer, indicating a poor optimization of the system.

Relevance: 100.00%

Abstract:

This thesis deals with the analytic study of the dynamics of multi-rotor Unmanned Aerial Vehicles. It is conceived to give a set of mathematical instruments suited to the theoretical study and design of these flying machines, and the entire work is organized in analogy with classical academic texts on airplane flight dynamics. First, the non-linear equations of motion are defined and all the external actions are modeled, with particular attention to rotor aerodynamics. All the equations are provided in a form, and with personal expedients, that makes them directly exploitable in a simulation environment; this required answering questions such as the trim of such mathematical systems. The entire treatment is developed with a view to describing different multi-rotor configurations. The linearized equations of motion are then derived, and the stability and control derivatives of the linear model are computed. The static and dynamic stability characteristics are thus studied, showing the influence of the various geometric and aerodynamic parameters of the machine, and of the rotors in particular. All the theoretical results are finally applied to two interesting cases. The first concerns the design of control systems for attitude stabilization: the linear model permits the tuning of the linear controller gains, and the non-linear model allows numerical testing. The second is the study of the performance of an innovative quad-rotor configuration: with the non-linear model the feasibility of maneuvers impossible for a traditional quad-rotor is assessed, and the linear model is applied to the controllability analysis of such an aircraft in the case of actuator blockage.
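
The linearization step follows the standard flight dynamics pattern (generic form, not the thesis's specific matrices): with state vector $x$ and input vector $u$,

```latex
\dot{x} = f(x, u),
\qquad
f(x_0, u_0) = 0 \ \ \text{(trim)},
\qquad
\Delta\dot{x} \;=\; A\,\Delta x + B\,\Delta u,
\quad
A = \left.\frac{\partial f}{\partial x}\right|_{(x_0,\,u_0)},
\quad
B = \left.\frac{\partial f}{\partial u}\right|_{(x_0,\,u_0)},
```

where the entries of $A$ and $B$ are the stability and control derivatives, and static/dynamic stability is read from the eigenvalues of $A$.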

Relevance: 100.00%

Abstract:

The objective of this thesis is the investigation of the Mode-I fracture mechanics parameters of quasi-brittle materials, to shed light on the influence of the width and size of the specimen on the fracture response of notched beams. To further the knowledge of the fracture process, 3D digital image correlation (DIC) was employed, and a new method is proposed to determine experimentally the critical value of the crack opening, which is then used to determine the size of the fracture process zone (FPZ). In addition, the Mode-I fracture mechanics parameters are compared with the Mode-II interfacial properties of composite materials whose matrices are the quasi-brittle materials studied under Mode-I conditions; the Mode-II fracture parameters are investigated through single-lap direct shear tests. Notched concrete beams with six cross-sections have been tested in a three-point bending (TPB) set-up (Mode-I fracture mechanics), considering two depths and three widths of the beam. In addition to the concrete beams, alkali-activated mortar (AAM) beams differing in the type and size of the aggregates have been tested in the same TPB set-up, in two dimensions. The load-deflection response obtained from DIC is compared with that obtained from the readings of two linear variable displacement transformers (LVDTs). Load responses, peak loads, strain profiles along the ligament from DIC, fracture energy, and failure modes of the TPB tests are discussed. The Mode-II problem is investigated by testing steel reinforced grout (SRG) composites bonded to masonry and concrete elements under single-lap direct shear tests. Two types of anchorage systems are proposed for SRG-reinforced masonry and concrete elements to study their effectiveness, and an indirect method is proposed to determine the interfacial properties, compare them with the Mode-I fracture properties of the matrix, and model the effect of the anchorage.
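
As an illustration of how the fracture energy mentioned above is typically extracted from a TPB test (a RILEM-style work-of-fracture sketch; the specimen dimensions and load curve below are invented placeholders, and self-weight corrections are omitted):

```python
# Fracture energy G_F as the area under the load-deflection curve of a
# notched TPB specimen divided by the ligament area.
import numpy as np

def fracture_energy(load, deflection, width, depth, notch_depth):
    """G_F = W0 / (b * (d - a0)), with W0 the work of fracture."""
    w0 = np.trapz(load, deflection)           # area under P-delta curve [N*m]
    ligament = width * (depth - notch_depth)  # b * (d - a0) [m^2]
    return w0 / ligament                      # [N/m] = [J/m^2]

# Toy softening-type response standing in for measured LVDT/DIC data.
delta = np.linspace(0, 1e-3, 200)                     # deflection [m]
P = 1200 * (delta / 2e-4) * np.exp(1 - delta / 2e-4)  # load [N], peak ~1200 N
GF = fracture_energy(P, delta, width=0.10, depth=0.15, notch_depth=0.05)
```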