906 results for Autoregressive-Moving Average model


Relevance: 30.00%

Publisher:

Abstract:

Effects of roads on wildlife and its habitat have been measured using metrics such as the nearest road distance, road density, and effective mesh size. In this work we introduce two new indices: (1) the Integral Road Effect (IRE), which measures the summed effect of the points of a road at a fixed point in the forest; and (2) the Average Value of the Infinitesimal Road Effect (AVIRE), which measures the average effect of the roads at this point. IRE is formally defined as the line integral of a special function (the infinitesimal road effect) along the curves that model the roads, whereas AVIRE is the quotient of IRE by the length of the roads. Combining tools of the ArcGIS software with a numerical algorithm, we calculated these and other road and habitat cover indices at a sample of points in a human-modified landscape in the Brazilian Atlantic Forest, where data on the abundance of two groups of small mammals (forest specialists and habitat generalists) were collected in the field. We then compared, through the Akaike Information Criterion (AIC), a set of candidate regression models to explain the variation in small mammal abundance, including models with our two new road indices (AVIRE and IRE), models with other road effect indices (nearest road distance, mesh size, and road density), and reference models (containing only habitat indices, or only the intercept without the effect of any variable). Compared to the other road effect indices, AVIRE showed the best performance in explaining the abundance of forest specialist species, whereas the nearest road distance performed best for generalist species. AVIRE and habitat together were included in the best model for both small mammal groups; that is, higher abundance of specialist and generalist small mammals occurred where the average road effect was lower (less AVIRE) and there was more habitat. Moreover, unlike the other road effect indices (except mesh size), AVIRE was not significantly correlated with habitat cover for either specialists or generalists, which allows the effect of roads to be separated from the effect of habitat on small mammal communities. We suggest that the proposed indices and GIS procedures could also be useful to describe other spatial ecological phenomena, such as edge effects in habitat fragments. (C) 2012 Elsevier B.V. All rights reserved.
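In schematic form (notation introduced here for illustration, not taken from the paper), if the roads are modeled by a curve Γ parameterized by arc length s through γ(s), and f(p, q) denotes the infinitesimal effect of a road point q on a forest point p, the two indices can be written as

    \mathrm{IRE}(p) = \int_{\Gamma} f\bigl(p, \gamma(s)\bigr)\, \mathrm{d}s,
    \qquad
    \mathrm{AVIRE}(p) = \frac{\mathrm{IRE}(p)}{L(\Gamma)},
    \qquad
    L(\Gamma) = \int_{\Gamma} \mathrm{d}s ,

so IRE grows with the total amount of road weighted by its effect, whereas AVIRE is normalized by road length and therefore isolates the average intensity of that effect at the point.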

Relevance: 30.00%

Publisher:

Abstract:

Scientists predict that global agricultural lands will expand over the next few decades due to increasing demands for food production and an exponential increase in crop-based biofuel production. These changes in land use will greatly impact biogeochemical and biogeophysical cycles across the globe. It is therefore important to develop models that can accurately simulate the interactions between the atmosphere and important crops. In this study, we develop and validate a new process-based sugarcane model (included as a module within the Agro-IBIS dynamic agro-ecosystem model) which can be applied at multiple spatial scales. At the site level, the model systematically underestimated the daily sensible heat flux (H) by 10.5% and overestimated the daily latent heat flux (E) by 14.8% when compared against micrometeorological observations from southeast Brazil. The model underestimated ET (relative bias between -10.1% and 12.5%) when compared against an agro-meteorological field experiment from northeast Australia. At the regional level, the model accurately simulated average yield for the four largest mesoregions (clusters of municipalities) in the state of Sao Paulo, Brazil, over a period of 16 years, with a yield relative bias of -0.68% to 1.08%. Finally, the simulated annual average sugarcane yield over 31 years for the state of Louisiana (US) had a low relative bias (-2.67%) but exhibited a lower interannual variability than the observed yields.
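As a minimal illustration of the validation metrics quoted above, the sketch below computes a relative bias (one common definition, assumed here) and the interannual variability as a standard deviation; the yield values are placeholders for illustration only, not data from the study.

    import numpy as np

    def relative_bias(sim, obs):
        """Relative bias in percent (assumed definition):
        100 * (mean simulated - mean observed) / mean observed."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 100.0 * (sim.mean() - obs.mean()) / obs.mean()

    # Hypothetical annual sugarcane yields (t/ha), for illustration only.
    obs = np.array([78.0, 81.5, 74.2, 80.1])
    sim = np.array([77.6, 80.9, 75.0, 79.8])

    print(relative_bias(sim, obs))            # small bias, as for the Sao Paulo mesoregions
    print(sim.std(ddof=1), obs.std(ddof=1))   # interannual variability (standard deviations)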

Relevance: 30.00%

Publisher:

Abstract:

This paper addressed the problem of water-demand forecasting for real-time operation of water supply systems. The present study was conducted to identify the best-fit model using hourly consumption data from the water supply system of Araraquara, São Paulo, Brazil. Artificial neural networks (ANNs) were used in view of their enhanced capability to match or even improve on the regression model forecasts. The ANNs used were the multilayer perceptron with the back-propagation algorithm (MLP-BP), the dynamic neural network (DAN2), and two hybrid ANNs. The hybrid models used the error produced by the Fourier series forecasting as input to the MLP-BP and DAN2, called ANN-H and DAN2-H, respectively. The inputs tested for the neural networks were selected based on the literature and on correlation analysis. The results from the hybrid models were promising, with DAN2 performing better than the tested MLP-BP models. DAN2-H, identified as the best model, produced a mean absolute error (MAE) of 3.3 L/s and 2.8 L/s for the training and test sets, respectively, for the prediction of the next hour, which represented about 12% of the average consumption. The best forecasting model for the next 24 hours was again DAN2-H, which outperformed the other compared models and produced a MAE of 3.1 L/s and 3.0 L/s for the training and test sets, respectively, which represented about 12% of the average consumption. DOI: 10.1061/(ASCE)WR.1943-5452.0000177. (C) 2012 American Society of Civil Engineers.
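A minimal sketch of the hybrid idea described above (the residual of a Fourier-series forecast fed as an additional input to a neural network), using a generic scikit-learn MLP on synthetic hourly data. DAN2 and the actual Araraquara inputs are not reproduced; all series and parameters here are placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic hourly demand with a daily cycle (placeholder for the real series).
    t = np.arange(24 * 60)                       # 60 days of hourly data
    demand = 25 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.5, t.size)

    # Fourier-series forecast: fit a single daily harmonic by least squares.
    X_f = np.column_stack([np.sin(2 * np.pi * t / 24),
                           np.cos(2 * np.pi * t / 24),
                           np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X_f, demand, rcond=None)
    fourier_pred = X_f @ coef
    residual = demand - fourier_pred             # Fourier error used as extra input (hybrid idea)

    # One-step-ahead dataset: previous hour, hour of day, and the Fourier residual.
    X = np.column_stack([demand[:-1], t[1:] % 24, residual[:-1]])
    y = demand[1:]
    split = int(0.8 * len(y))

    mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    mlp.fit(X[:split], y[:split])
    mae = np.mean(np.abs(mlp.predict(X[split:]) - y[split:]))
    print(f"test MAE: {mae:.2f} L/s")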

Relevance: 30.00%

Publisher:

Abstract:

A neural network model to predict ozone concentration in the Sao Paulo Metropolitan Area was developed, based on average values of meteorological variables in the morning (8:00-12:00 hr) and afternoon (13:00-17:00 hr) periods. Outputs are the maximum and average ozone concentrations in the afternoon (12:00-17:00 hr). The correlation coefficient between computed and measured values was 0.82 and 0.88 for the maximum and average ozone concentration, respectively. The model showed good performance as a prediction tool for the maximum ozone concentration. For prediction periods of 1 to 5 days, failure rates of 0 to 23% (at 95% confidence) were obtained.
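A minimal sketch of the kind of model described above: a small feed-forward network regressing afternoon ozone on morning and afternoon meteorological averages, reporting the correlation between computed and measured values. The predictors and data below are synthetic placeholders, not the variables used by the authors.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Hypothetical predictors: morning/afternoon averages of, e.g., temperature,
    # solar radiation and wind speed (the actual variable set is not reproduced).
    n = 400
    X = rng.normal(size=(n, 6))
    o3_max = 60 + 12 * X[:, 0] + 8 * X[:, 1] - 6 * X[:, 2] + rng.normal(0, 8, n)

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
    net.fit(X[:300], o3_max[:300])

    # Correlation between computed and measured values (the abstract reports ~0.82).
    r = np.corrcoef(net.predict(X[300:]), o3_max[300:])[0, 1]
    print(f"r = {r:.2f}")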

Relevance: 30.00%

Publisher:

Abstract:

Background: Magnetic hyperthermia is currently a clinical therapy approved in the European Union for treatment of tumor cells, and uses magnetic nanoparticles (MNPs) under time-varying magnetic fields (TVMFs). The same basic principle seems promising against trypanosomatids causing Chagas disease and sleeping sickness, given that the therapeutic drugs available have severe side effects and that there are drug-resistant strains. However, no applications of this strategy against protozoan-induced diseases have been reported so far. In the present study, Crithidia fasciculata, a widely used model for therapeutic strategies against pathogenic trypanosomatids, was targeted with Fe3O4 MNPs in order to provoke cell death remotely using TVMFs. Methods: Iron oxide MNPs with average diameters of approximately 30 nm were synthesized by precipitation of FeSO4 in basic medium. The MNPs were added to C. fasciculata choanomastigotes in the exponential phase and incubated overnight, removing excess MNPs using a DEAE-cellulose resin column. The amount of MNPs loaded per cell was determined by magnetic measurement. The cells bearing MNPs were submitted to TVMFs using a homemade AC field applicator (f = 249 kHz, H = 13 kA/m), and the temperature variation during the experiments was measured. Scanning electron microscopy was used to assess morphological changes after the TVMF experiments. Cell viability was analyzed using an MTT colorimetric assay and flow cytometry. Results: MNPs were incorporated into the cells with no noticeable cytotoxicity. When a TVMF was applied to cells bearing MNPs, massive cell death was induced via a nonapoptotic mechanism. No effects were observed when applying a TVMF to control cells not loaded with MNPs. No macroscopic rise in temperature was observed in the extracellular medium during the experiments. Conclusion: As a proof of principle, these data indicate that intracellular hyperthermia is a suitable technology for inducing the death of protozoan parasites bearing MNPs. These findings expand the possibilities for new therapeutic strategies to combat parasitic infections.
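A back-of-the-envelope sketch based on the field parameters quoted above; the saturation magnetization used for a single 30 nm magnetite particle is an assumed bulk literature value, not a quantity reported in the study.

    import numpy as np

    # Field-frequency product from the applicator settings quoted in the abstract.
    f, H = 249e3, 13e3                 # Hz, A/m
    print(f * H)                       # ~3.2e9 A m^-1 s^-1

    # Magnetic moment of one 30 nm magnetite particle, assuming a bulk
    # saturation magnetization Ms ~ 4.8e5 A/m (assumed value, not from the study).
    d = 30e-9                          # particle diameter, m
    V = np.pi / 6 * d**3               # particle volume, m^3
    Ms = 4.8e5
    print(Ms * V)                      # ~6.8e-18 A m^2 per particle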

Relevance: 30.00%

Publisher:

Abstract:

The objective of this paper was to model variations in test-day milk yields of first lactations of Holstein cows by random regression (RR), using B-spline functions and Bayesian inference, in order to fit adequate and parsimonious models for the estimation of genetic parameters. The authors used 152,145 test-day milk yield records from 7,317 first lactations of Holstein cows. The model fitted in this study included additive genetic, permanent environmental and residual random effects. In addition, contemporary group and the linear and quadratic effects of age of cow at calving were included as fixed effects. The authors modeled the average lactation curve of the population with a fourth-order orthogonal Legendre polynomial. They concluded that a cubic B-spline with seven random regression coefficients for both the additive genetic and permanent environmental effects was the best model according to residual mean square and residual variance estimates. Moreover, they suggested that a lower-order model (a quadratic B-spline with seven random regression coefficients for both random effects) could be adopted, because it yielded practically the same genetic parameter estimates with greater parsimony. (C) 2012 Elsevier B.V. All rights reserved.
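A minimal sketch of the regression bases mentioned above: a cubic B-spline basis with seven coefficients over days in milk, and a fourth-order Legendre polynomial basis for the mean lactation curve. The lactation interval and knot placement are assumptions chosen for illustration, not the authors' choices.

    import numpy as np
    from scipy.interpolate import BSpline

    # Days in milk at which test-day records might be taken (5 to 305 is assumed here).
    dim = np.linspace(5, 305, 61)

    # Cubic B-spline (degree 3) with 7 coefficients needs 7 + 3 + 1 = 11 knots.
    degree, n_coef = 3, 7
    n_inner = n_coef - degree - 1                       # 3 interior knots
    inner = np.linspace(5, 305, n_inner + 2)[1:-1]
    knots = np.concatenate(([5] * (degree + 1), inner, [305] * (degree + 1)))

    # Basis functions phi_j(t): activate one coefficient at a time.
    basis = np.column_stack([
        BSpline(knots, np.eye(n_coef)[j], degree)(dim) for j in range(n_coef)
    ])

    # Standardized DIM on [-1, 1] for the fourth-order Legendre mean curve.
    x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1
    legendre = np.polynomial.legendre.legvander(x, 4)   # columns P0..P4

    print(basis.shape, legendre.shape)                  # (61, 7) (61, 5)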

Relevance: 30.00%

Publisher:

Abstract:

Background: The criteria for organ sharing have developed into a system that prioritizes liver transplantation (LT) for patients with hepatocellular carcinoma (HCC) who have the highest risk of wait-list mortality. In some countries this model allows only patients within the Milan Criteria (MC, defined by the presence of a single nodule up to 5 cm, or up to three nodules none larger than 3 cm, with no evidence of extrahepatic spread or macrovascular invasion) to be evaluated for liver transplantation. This policy implies that some patients with HCC slightly more advanced than allowed by the current strict selection criteria will be excluded, even though LT for these patients might be associated with acceptable long-term outcomes. Methods: We propose a mathematical approach to study the consequences of relaxing the MC for patients with HCC who do not comply with the current rules for inclusion in the transplantation candidate list. We consider overall 5-year survival rates compatible with the ones reported in the literature. We calculate the best strategy, the one that minimizes the total mortality of the affected population, that is, the total number of people in both groups of HCC patients who die within 5 years of the implementation of the strategy, either from post-transplantation death or from death due to the underlying HCC. We illustrate the analysis with a simulation of a theoretical population of 1,500 HCC patients with exponentially distributed tumor sizes. The parameter λ, obtained from the literature, was equal to 0.3. As the total number of patients in these real samples was 327, this implied an average tumor size of 3.3 cm with a 95% confidence interval of [2.9; 3.7]. The total number of available livers to be grafted was assumed to be 500. Results: With 1,500 patients on the waiting list and 500 grafts available, we simulated the total number of deaths among both transplanted and non-transplanted HCC patients after 5 years as a function of the tumor size of transplanted patients. The total number of deaths drops monotonically with tumor size, reaching a minimum at a size of 7 cm and increasing thereafter. At a tumor size of 10 cm the total mortality is equal to that at the 5 cm threshold of the Milan criteria. Conclusion: We concluded that it is possible to include patients with tumor sizes up to 10 cm without increasing the total mortality of this population.
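A simplified simulation in the spirit of the approach described above: tumor sizes drawn from an exponential distribution with λ = 0.3, 500 grafts allocated among 1,500 patients, and total 5-year deaths computed as a function of the size threshold. The survival values below are placeholders, not the literature-based rates used by the authors.

    import numpy as np

    rng = np.random.default_rng(0)

    n_patients, n_grafts, lam = 1500, 500, 0.3
    sizes = rng.exponential(scale=1 / lam, size=n_patients)   # tumor sizes in cm

    surv_no_lt = 0.10                  # assumed 5-year survival without transplant
    def surv_lt(size_cm):
        # assumed: post-transplant survival decreases with tumor size
        return np.clip(0.75 - 0.03 * size_cm, 0.2, None)

    def total_deaths(threshold_cm):
        # transplant the smallest eligible tumors, up to the number of grafts
        eligible = np.sort(sizes[sizes <= threshold_cm])[:n_grafts]
        deaths_tx = np.sum(1 - surv_lt(eligible))
        deaths_no = (n_patients - len(eligible)) * (1 - surv_no_lt)
        return deaths_tx + deaths_no

    for thr in [3, 5, 7, 10, 12]:
        print(thr, round(total_deaths(thr)))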

Relevance: 30.00%

Publisher:

Abstract:

The use of tendons for the transmission of forces and movements in robotic devices has been investigated by several researchers all over the world. The interest in this kind of actuation modality is based on the possibility of optimizing the position of the actuators with respect to the moving part of the robot, on the reduced weight, high reliability and simplicity of the mechanical design and, finally, on the reduced cost of the resulting kinematic chain. After a brief discussion of the benefits that the use of tendons can introduce in the motion control of a robotic device, the design and control aspects of the UB Hand 3 anthropomorphic robotic hand are presented. In particular, the tendon-sheath transmission system adopted in the UB Hand 3 is analyzed and the problem of force control and friction compensation is taken into account. The implementation of a tendon-based, antagonistically actuated robotic arm is then investigated. With this kind of actuation modality, and by using transmission elements with a nonlinear force/compression characteristic, it is possible to achieve simultaneous stiffness and position control, improving in this way the safety of the device during operation in unknown environments and in the case of interaction with other robots or with humans. The problem of modeling and control of this type of robotic device is then considered, and the stability analysis of the proposed controller is reported. Finally, some tools for the real-time simulation of dynamic systems are presented. This real-time simulation environment has been developed with the aim of improving the reliability of real-time control applications, both for rapid prototyping of controllers and as a teaching tool for automatic control courses.
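A schematic illustration of why nonlinear force/elongation characteristics in an antagonistic tendon pair allow simultaneous position and stiffness control (quadratic characteristics are assumed here purely for illustration): with tendon tensions f_i = a x_i^2, tendon stretches x_1 = l_1 - r q and x_2 = l_2 + r q for joint angle q and pulley radius r,

    \tau = r\,(f_1 - f_2) = r\,a\,(x_1 + x_2)(x_1 - x_2),
    \qquad
    S = -\frac{\partial \tau}{\partial q} = 2\,r^{2} a\,(x_1 + x_2),

so the differential command (x_1 - x_2) sets the joint torque and equilibrium position, while the co-contraction (x_1 + x_2) sets the joint stiffness independently; with linear tendon characteristics the stiffness would instead be constant.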

Relevance: 30.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during these cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, this model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters and the energetics of turbulence in the ICM from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For these reasons we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that during a merger a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last “geometrical” M_H-R_H correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R-R_H, P_R-M_H, P_R-T, P_R-L_X, ...) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster, and this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass and thus that the non-thermal component in clusters is not self-similar.
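The cut-off argument sketched above can be written in a schematic, order-of-magnitude form (assumed here for illustration; the detailed expressions depend on the adopted turbulence and loss model). Balancing systematic turbulent acceleration against synchrotron and inverse Compton losses,

    \frac{\gamma}{\tau_{\rm acc}} \simeq \beta\,\gamma^{2},
    \qquad
    \beta \propto B^{2} + B_{\rm CMB}^{2}\,(1+z)^{4}
    \;\;\Rightarrow\;\;
    \gamma_{\rm max} \simeq \frac{1}{\beta\,\tau_{\rm acc}},
    \qquad
    \nu_{\rm max} \propto \gamma_{\rm max}^{2}\,B,

so a more efficient acceleration (shorter τ_acc) pushes the synchrotron cut-off ν_max to higher frequencies, which is why high-frequency surveys select only the most efficient merger events while low-frequency surveys should detect many more halos.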

Relevance: 30.00%

Publisher:

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was thought to be optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was thought to be optimized to compare the properties of low-luminosity sources with those of higher-luminosity ones and, thus, it was also used to test emission mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies and in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration. These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW ∝ L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection was also recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges by supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has proved that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^38 and 10^43 erg s^−1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF systems without an accretion disk.
However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained if only high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation). The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM–Newton. The results show that the accretion flow can differ significantly between objects when it is analyzed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to be formed spiraling in the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recurrent modulations have been measured both in the continuum emission and in the broad emission line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission line component. In addition, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined in the accretion disk and moves along the line of sight with velocities as large as v ~ 0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes for the dynamics of the innermost regions of accretion flows, for tackling the formation of ejecta/jets and for placing constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM. Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and of the formation of the highest-velocity outflows.

Relevance: 30.00%

Publisher:

Abstract:

The aim of this PhD thesis was to study different liquid crystal (LC) systems at a microscopic level in order to determine their physical properties, resorting to two distinct methodologies, one involving computer simulations and the other spectroscopic techniques, in particular electron spin resonance (ESR) spectroscopy. By means of the computer simulation approach we tried to demonstrate the effectiveness of this tool for calculating anisotropic static properties of a LC material, as well as for predicting its behaviour and features. This required the development and adoption of suitable molecular models based on convenient intermolecular potentials reflecting the essential molecular features of the investigated system. In particular, concerning the simulation approach, we have set up models for discotic liquid crystal dimers and we have studied, by means of Monte Carlo simulations, their phase behaviour and self-assembling properties with respect to the simple monomer case. Each discotic dimer is described by two oblate Gay-Berne ellipsoids connected by a flexible spacer, modelled by a harmonic "spring" of three different lengths. In particular, we investigated the effects of dimerization on the transition temperatures, as well as on the characteristics of the molecular aggregation displayed and the resulting orientational order. Moving to the experimental results, among the many experimental techniques that are typically employed to evaluate the distinctive features of LC systems, ESR has proved to be a powerful tool for the microscopic-scale investigation of the properties, structure, order and dynamics of these materials. We have taken advantage of the high sensitivity of the ESR spin-probe technique to investigate increasingly complex LC systems, ranging from devices constituted by a polymer matrix in which LC molecules are confined in the shape of nanodroplets, to biaxial liquid crystalline elastomers, and to dimers whose monomeric units or lateral groups are constituted by rod-like mesogens (11BCB). Reflection-mode holographic polymer-dispersed liquid crystals (H-PDLCs) are devices in which LCs are confined into nanosized (50-300 nm) droplets, arranged in layers which alternate with polymer layers, forming a diffraction grating. We have determined the configuration of the LC local director and we have derived a model of the nanodroplet organization inside the layers. Resorting also to additional information on the nanodroplet size and shape distribution provided by SEM images of the H-PDLC cross-section, the observed director configuration has been modeled as a bidimensional distribution of elongated nanodroplets whose long axis is, on average, parallel to the layers and whose internal director configuration is a uniaxial quasi-monodomain aligned along the nanodroplet long axis. The results suggest that the molecular organization is dictated mainly by the confinement, explaining, at least in part, the need for significantly higher switching voltages and the faster turn-off times observed in H-PDLCs compared to standard PDLC devices. Liquid crystal elastomers consist of cross-linked polymers in which mesogens represent the monomers constituting the main chain or the laterally attached side groups. They bring together three important aspects: orientational order in amorphous soft materials, responsive molecular shape and quenched topological constraints.
In biaxial nematic liquid crystalline elastomers (BLCEs), two orthogonal directions, rather than the single one of ordinary uniaxial nematics, can be controlled, greatly enhancing their potential value for applications as novel actuators. Two versions of side-chain BLCEs were characterized: side-on and end-on. Many tests have been carried out on both types of LCE, the main features detected being the lack of a significant dynamical behaviour, together with a strong permanent alignment along the principal director, and the confirmation of the transition temperatures already determined by DSC measurements. The end-on sample demonstrates a less hindered rotation of the side-group mesogenic units and a greater freedom of alignment to the magnetic field, as already shown by previous NMR studies. Biaxial nematic ESR static spectra were also obtained on the basis of Molecular Dynamics generated biaxial configurations, to be compared with the experimentally determined ones, as a means to establish a possible relation between biaxiality and the spectral features. This provides a concrete example of the advantages of combining the computer simulation and spectroscopic approaches. Finally, the dimer α,ω-bis(4'-cyanobiphenyl-4-yl)undecane (11BCB), synthesized in the "quest" for the biaxial nematic phase, has been analysed. Its importance lies in the significance of dimers as building blocks in the development of new materials to be employed in innovative technological applications, such as faster-switching displays, resorting to the easier aligning ability of the secondary director in biaxial phases. A preliminary series of tests was performed, revealing the population of mesogenic molecules as divided into two groups: one of elongated, straightened conformers sharing a common director, and one of bent molecules, which display no order, being equally distributed in the three dimensions. Employing this model, the calculated values show a consistent trend, confirming at the same time the transition temperatures indicated by the DSC measurements, together with rotational diffusion tensor values that closely follow those of the constituting monomer 5CB.
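A minimal sketch of the Metropolis Monte Carlo scheme underlying the dimer simulations described above, with the anisotropic Gay-Berne potential and the harmonic spacer replaced by a plain Lennard-Jones pair energy for brevity (an assumption; only the acceptance logic is illustrated).

    import numpy as np

    rng = np.random.default_rng(1)

    def pair_energy(r2):
        # Lennard-Jones placeholder for the anisotropic Gay-Berne interaction
        inv6 = 1.0 / r2**3
        return 4.0 * (inv6**2 - inv6)

    def total_energy(pos, box):
        e = 0.0
        for i in range(len(pos)):
            d = pos[i + 1:] - pos[i]
            d -= box * np.round(d / box)          # periodic boundary conditions
            e += pair_energy(np.sum(d * d, axis=1)).sum()
        return e

    n, box, T, step = 50, 6.0, 1.0, 0.15
    pos = rng.uniform(0, box, size=(n, 3))
    energy = total_energy(pos, box)

    for sweep in range(200):
        for i in range(n):
            trial = pos.copy()
            trial[i] = (trial[i] + rng.uniform(-step, step, 3)) % box
            e_new = total_energy(trial, box)
            de = e_new - energy
            # Metropolis acceptance criterion
            if de <= 0 or rng.random() < np.exp(-de / T):
                pos, energy = trial, e_new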

Relevance: 30.00%

Publisher:

Abstract:

Since the development of quantum mechanics it has been natural to analyze the connection between classical and quantum mechanical descriptions of physical systems. In particular, one should expect that, in some sense, when quantum mechanical effects become negligible the system will behave as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (that depend on Planck's constant h) converge to classical values evolving according to the expected classical equations when h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation, by means of a Yukawa-type potential. The classical limit is a mean-field and weak-coupling limit. We proved that the transition amplitude of a creation or annihilation operator, between suitable coherent states, converges in the classical limit to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to the corresponding products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed-particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
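Schematically (the notation here is introduced only for illustration and is not the thesis's own), the convergence described above can be summarized as follows: for coherent states Ψ_h built on classical initial data (u_0, α_0),

    \lim_{h \to 0}\;
    \big\langle \Psi_h,\; a(k, t)\, \Psi_h \big\rangle
    \;=\; \alpha(k, t),

where (u(t), α(t)) solves the coupled Schrödinger-Klein-Gordon system with Yukawa-type coupling and initial data (u_0, α_0), and normal-ordered products of creation and annihilation operators converge analogously to the corresponding products of the classical solution.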

Relevance: 30.00%

Publisher:

Abstract:

In the present work, structure-property relationships of the conjugated model polymer MEH-PPV were investigated. For this purpose, precipitation fractionation was used to obtain MEH-PPV with different molecular weights (Mw), in particular MEH-PPV with low Mw, since this is optimally suited for optical waveguide devices. We found that the preparation of a sufficient amount of MEH-PPV with low Mw and a narrow Mw distribution depends essentially on the appropriate choice of the solvent and of the temperature during the addition of the precipitant. As an alternative, UV-induced chain-scission effects were investigated. From the comparison of the two approaches we conclude that precipitation fractionation is better suited than UV treatment for producing MEH-PPV with a specific Mw, since the UV light creates chain defects along the polymer backbone. 1H NMR and FTIR spectroscopy were used to investigate these chain defects. We also observed that the wavelengths of the absorption maxima of the MEH-PPV fractions increase with chain length until the number of repeat units reaches n ≈ 110. This value is significantly larger than previously reported. Optical properties of MEH-PPV waveguides were investigated, and it was shown that the optical constants can be reproduced excellently. We studied the influence of the solvent and of the temperature during spin coating on film thickness, surface roughness, refractive index, birefringence and waveguide attenuation loss. We found that with increasing boiling point of the solvent the film thickness and the roughness decrease, while the refractive index, the birefringence and the waveguide attenuation losses increase. We conclude that high solvent boiling points lead to low evaporation rates, which favors aggregate formation during spin coating. In contrast, an elevated temperature during film preparation increases film thickness and roughness, whereas the refractive index and the birefringence decrease. For film preparation on glass substrates and silica fibers, the dip-coating technique was employed. The film thickness depends on the concentration of the solution, the withdrawal speed and the immersion time. Using dip coating, we deposited MEH-PPV films on bottle microresonators to investigate all-optical switching processes. This approach, especially with low-Mw MEH-PPV, proves promising for all-optical signal processing with large bandwidth. In addition, the morphology of thin films of other PPV derivatives was investigated by FTIR spectroscopy. We found that the degree of alkyl substitution has a strong influence on the average orientation of the polymer backbones in thin films.

Relevance: 30.00%

Publisher:

Abstract:

This thesis focuses on the design and characterization of a novel, artificial minimal model membrane system with chosen physical parameters to mimic a nanoparticle uptake process driven exclusively by adhesion and the softness of the bilayer. The realization is based on polymersomes composed of poly(dimethylsiloxane)-b-poly(2-methyloxazoline) (PDMS-b-PMOXA) and nanoscopic colloidal particles (polystyrene, silica), together with the use of powerful characterization techniques. PDMS-b-PMOXA polymersomes with a radius Rh ~ 100 nm, a size polydispersity PD = 1.1 and a membrane thickness h = 16 nm were prepared using the film rehydration method. Owing to their suitable mechanical properties (Young's modulus of ~17 MPa and a bending modulus of ~7·10^-8 J), along with their long-term stability and modifiability, these polymersomes can be used as model membranes to study physical and physicochemical aspects of the transmembrane transport of nanoparticles. A combination of photon (PCS) and fluorescence (FCS) correlation spectroscopies optimizes species selectivity, which is necessary for a unique internalization study encompassing two main efforts. For the proof of concept, the first effort focused on the interaction of nanoparticles (Rh,NP(SiO2) = 14 nm, Rh,NP(PS) = 16 nm; cNP = 0.1 g/L) and polymersomes (Rh,P = 112 nm; cP = 0.045 g/L) with fixed size and concentration. Identification of a modified form factor of the polymersome entities, selectively seen in the PCS experiment, enabled precise monitoring and a quantitative description of the incorporation process. Combining PCS and FCS led to an estimate of the number of incorporated particles per polymersome (about 8 in the examined system) and to the development of an appropriate methodology for the kinetics and dynamics of the internalization process. The second effort aimed at establishing the phenomenology necessary to facilitate comparison with theories. The size and concentration of the nanoparticles were chosen as the most important system variables (Rh,NP = 14-57 nm; cNP = 0.05-0.2 g/L). It was revealed that the incorporation process can be controlled to a significant extent by changing the nanoparticle size and concentration. On average, 7 to 11 NPs with Rh,NP = 14 nm and 3 to 6 NPs with Rh,NP = 25 nm can be internalized into the present polymersomes by changing the initial nanoparticle concentration in the range 0.1-0.2 g/L. Rapid internalization of the particles by the polymersomes is observed only above a critical threshold particle concentration, which depends on the nanoparticle size. With regard to possible pathways for particle uptake, cryogenic transmission electron microscopy (cryo-TEM) revealed two different incorporation mechanisms depending on the size of the involved nanoparticles: cooperative incorporation of groups of nanoparticles, or incorporation of single nanoparticles. Conditions for nanoparticle uptake and controlled filling of polymersomes were presented. In the framework of this thesis, the experimental observation of transmembrane transport of spherical PS and SiO2 NPs into polymersomes via an internalization process was reported and examined quantitatively for the first time. In summary, the work performed in the frame of this thesis may have a significant impact on the development of cell model systems and thus on an improved understanding of transmembrane transport processes.
The present experimental findings help create the missing phenomenology necessary for a detailed understanding of a phenomenon of great relevance in transmembrane transport. The fact that transmembrane transport of nanoparticles can be achieved by an artificial model system without any additional stimuli has a fundamental impact on the understanding not only of the nanoparticle invagination process, but also of the interaction of nanoparticles with biological as well as polymeric membranes.

Relevance: 30.00%

Publisher:

Abstract:

Among the many consequences of the advent of digital technology, the rearticulation of the relationship between the still image and the moving image is certainly one of the most profound. Symptomatic of the changes under way both in film studies and in art history, this rearticulation calls for a rethinking of the traditional disciplinary boundaries within which cinema and photography have been treated as separate and distinct objects of study. Adopting a multiple approach that encompasses perspectives from New Film History and media archaeology, from art theory and from visual studies, this work explores the existence of a dialectical relationship between cinema and photography, understood in a twofold way: as a constitutive tension between two inextricably connected media, not so much because they share the same realistic principle of representation as, rather, by virtue of an incessant exchange in the modeling of categories such as time, movement, stillness, the instant and duration; and as a distinctive instance of contemporary artistic practice, a paradigm of reference in the aesthetic production of images. The thesis is divided into three chapters. The first focuses on the relationship between the stillness and the movement of the image as a key capable of connecting the aesthetics of attractions and chronophotography to a series of filmic and artistic experiences produced in the territories of the avant-gardes. The second chapter considers the emergence, since the 1990s, of artistic practices in which the intermedial encounter between film and photography provides analytical models for investigating the current aesthetic and technological condition. The third offers a critical overview of a case study, GIF art. The GIF is an obsolete digital format that makes it possible to produce images that appear simultaneously still and animated; in the present work, the GIF is discussed as a medium capable of contradicting the boundaries through which we conceive of the still and the moving image, while also suggesting a possible anti-linear model of historical-chronological thought.