904 results for SOLUTION-PHASE APPROACH
Abstract:
The hierarchical forest planning process currently in place on public lands risks failing at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At the lower level, it does not support realization of the full value-creation potential of the forest resource, sometimes needlessly constraining short-term harvest planning. These failures are attributable to certain assumptions implicit in the wood supply optimization model, which may explain why the problem is not well documented in the literature. We use agency theory to model the hierarchical forest planning process on public lands. We develop a two-stage iterative simulation framework to estimate the long-term effect of the interaction between the government and the fibre consumer, allowing us to identify conditions that can lead to supply failures (stockouts). We then propose an improved formulation of the wood supply optimization model. The classic formulation (i.e., maximization of even-flow fibre yield) does not consider that the industrial fibre consumer seeks to maximize profit; instead, it assumes total consumption of the fibre supply in every period, regardless of its value-creation potential. We extend the classic formulation to anticipate fibre-consumer behaviour, increasing the probability that the fibre supply will be entirely consumed and thereby restoring the validity of the total-consumption assumption implicit in the optimization model. We model the principal-agent relationship between government and industry using a bilevel formulation of the optimization model, in which the upper level represents the wood supply determination process (the government's responsibility) and the lower level represents the fibre consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of supply failures, improving the credibility of the hierarchical forest planning process. Together, the bilevel wood supply optimization model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and iterative simulation framework are a step forward in value-driven forest planning technology. Explicitly integrating industrial objectives and constraints into the forest planning process, beginning at the wood supply determination stage, should foster closer collaboration between government and industry, making it possible to exploit the full value-creation potential of the forest resource.
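To make the bilevel structure concrete, here is a minimal sketch, not the authors' model: the upper level (government) picks a per-period fibre offer from a coarse grid while anticipating a lower-level (industry) consumer that only takes profitable fibre. All prices, costs, and caps are invented for illustration.

```python
# Illustrative bilevel wood-supply sketch (all numbers hypothetical).
# Upper level (government): choose a per-period fibre offer that
# maximizes fibre actually consumed, anticipating the consumer.
# Lower level (industry): consume only what is profitable.

import itertools

PRICE = [50.0, 40.0, 55.0]   # market price per period (assumed)
COST = 45.0                  # industry's unit procurement cost (assumed)
GROWTH = 100.0               # sustainable growth per period (assumed)

def consumer_response(offer):
    """Lower level: industry consumes the offer only when profitable."""
    return [q if p > COST else 0.0 for q, p in zip(offer, PRICE)]

best = None
# Upper level: enumerate candidate offers on a coarse grid.
for offer in itertools.product([0.0, 50.0, 100.0], repeat=3):
    if sum(offer) > GROWTH * 3:          # crude sustainability cap
        continue
    consumed = consumer_response(offer)  # anticipate the agent
    unconsumed = sum(o - c for o, c in zip(offer, consumed))
    score = (sum(consumed), -unconsumed)
    if best is None or score > best[0]:
        best = (score, offer, consumed)

(_, offer, consumed) = best
print("offer:", offer, "consumed:", consumed)
```

Under the classic even-flow assumption the offer would be scored as if it were fully consumed; scoring the anticipated response instead is what lets the upper level avoid offering fibre that would go unconsumed.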
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were derived manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how the method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, so analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noise are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are phase-locking, the arm lengths, and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; in our toy-model investigations this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation method that takes advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
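A minimal numpy sketch of the eigendecomposition idea described above, under toy assumptions (a single common laser noise of very large variance appearing identically in every channel, stationary noises, no time shifts): the covariance eigenvalues split into one laser-dominated value and a laser-free set, and projecting onto the laser-free eigenvectors cancels the laser noise, as TDI-like combinations do.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 20000, 4

# Toy raw data: a common, very large laser frequency noise appears in
# all channels (with time shifts in the real antenna), plus small
# independent photodetector noise per channel.
laser = 100.0 * rng.standard_normal(n_samples)
data = np.stack([laser + rng.standard_normal(n_samples)
                 for _ in range(n_channels)], axis=1)

cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
print("eigenvalues:", eigvals)
# One eigenvalue is ~ n_channels * var(laser); the rest are O(1).
# The eigenvectors of the small eigenvalues combine the channels so
# the common laser noise cancels, like TDI observables.
clean = data @ eigvecs[:, :-1]           # laser-noise-free combinations
print("residual variances:", clean.var(axis=0))
```

In the real antenna the laser terms enter with time shifts, which is why, as noted above, the exact eigenvectors depend on the arm lengths and on the length of the data.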
Abstract:
This paper deals with phase control of the Neurospora circadian rhythm. The nonlinear control, applied by tuning parameters of the Neurospora dynamical model (treated as controlled variables), allows the circadian rhythm to track a reference rhythm. When there are several parameters (three in this paper) whose values are unknown, an adaptive control law reveals its weakness, since parameter convergence and the control objective must be guaranteed at the same time. We show that this problem can be solved by using a genetic algorithm for parameter estimation. Once the unknown parameters are estimated, phase control is performed by a chaos synchronization technique.
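As a hedged sketch of the genetic-algorithm estimation step: the rhythmic model below is a generic three-parameter oscillator stand-in, not the actual Neurospora equations, and the GA is a bare-bones real-coded selection-plus-mutation loop.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)

def model(params, t):
    """Stand-in rhythmic model with three unknown parameters (a, w, phi)."""
    a, w, phi = params
    return a * np.sin(w * t + phi)

true = np.array([2.0, 1.5, 0.3])
reference = model(true, t)                  # target rhythm to track

def fitness(p):
    # Negative mean squared mismatch with the reference rhythm.
    return -np.mean((model(p, t) - reference) ** 2)

# Simple real-coded genetic algorithm: elitist selection + mutation.
pop = rng.uniform([0.1, 0.1, -np.pi], [5.0, 5.0, np.pi], size=(60, 3))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]   # keep the best third
    children = elite[rng.integers(0, 20, 40)] \
        + 0.05 * rng.standard_normal((40, 3))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated parameters:", best)
```

Once an estimate like this is in hand, the synchronization-based phase controller can be designed as if the parameters were known.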
Abstract:
In recent times, a significant research effort has been focused on how deformable linear objects (DLOs) can be manipulated for real-world applications such as the assembly of wiring harnesses for the automotive and aerospace sectors. This remains an open topic because of the difficulty of accurately modelling the behaviour of these objects and simulating tasks involving their manipulation across a variety of scenarios. These problems have led to the development of data-driven techniques, in which machine learning is exploited to obtain reliable solutions. However, such solutions are difficult to extend, since the learning must be replicated almost from scratch whenever the scenario changes. It follows that some model-based methodology must be introduced to generalize the results and reduce the training effort accordingly. The objective of this thesis is to develop a solution for DLO manipulation to assemble a wiring harness for the automotive sector, based on the adaptation of a base trajectory by means of reinforcement learning methods. The idea is to create trajectory planning software capable of solving the proposed task, reducing where possible the learning time, which is spent in real time, while maintaining suitable performance and reliability. The solution has been implemented on a collaborative 7-DOF Panda robot at the Laboratory of Automation and Robotics of the University of Bologna. Experimental results are reported showing that the robot is capable of optimizing the manipulation of the DLOs, gaining experience with task repetition while exhibiting a high success rate from the very beginning of the learning phase.
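A minimal sketch of the base-trajectory-adaptation idea under stated assumptions: the "task" is a placeholder cost function, the trajectory is one-dimensional, and the learning rule is a simple evaluative accept-if-better perturbation loop standing in for the thesis's actual reinforcement learning method.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50                                    # waypoints in the trajectory
base = np.linspace(0.0, 1.0, T)           # nominal (base) trajectory

def task_cost(traj):
    """Placeholder for task quality; real feedback would come from
    executing the manipulation on the robot."""
    target = np.sin(np.linspace(0.0, np.pi, T))   # unknown ideal path
    return np.sum((traj - target) ** 2)

offsets = np.zeros(T)                     # learned correction to the base
best_cost = task_cost(base + offsets)
for episode in range(500):
    # Explore a perturbed offset; keep it only if the task improves.
    candidate = offsets + 0.05 * rng.standard_normal(T)
    cost = task_cost(base + candidate)
    if cost < best_cost:                  # evaluative, gradient-free update
        offsets, best_cost = candidate, cost

print("final cost:", best_cost)
```

The design choice mirrors the abstract: starting from a sensible base trajectory keeps early episodes viable (high success rate from the start) while repetition gradually improves the adaptation.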
Abstract:
Lipidic mixtures present a particular phase-change profile, strongly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modelling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behaviour correctly, and this inability increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure, the Crystal-T algorithm, to depict the SLE of fatty binary mixtures presenting solid solutions. Considering the non-ideality of both the liquid and solid phases, the algorithm determines the temperatures at which the first and the last crystal of the mixture melt. The evaluation focuses on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess-Gibbs-energy-based equations, with the group-contribution UNIFAC model used to calculate the activity coefficients of both the liquid and solid phases. The very low deviations between theoretical and experimental data evidence the strength of the algorithm, contributing to the enlargement of the scope of SLE modelling.
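A minimal sketch of the root-finding step such an algorithm requires, with ideal activity coefficients (gamma = 1) standing in for UNIFAC and invented pure-component properties: bisection on the classic liquidus condition gives the temperature at which the last crystal melts for a given liquid composition.

```python
import math

R = 8.314          # gas constant, J/(mol K)

# Illustrative pure-component melting properties (hypothetical values).
dH_fus = 45000.0   # enthalpy of fusion, J/mol
T_m = 330.0        # pure-component melting temperature, K

def liquidus_residual(T, x, gamma=1.0):
    """Classic SLE condition  ln(x*gamma) = (dH/R)(1/Tm - 1/T).
    gamma would come from an activity model (UNIFAC in the paper);
    gamma = 1 in this ideal sketch."""
    return math.log(x * gamma) - (dH_fus / R) * (1.0 / T_m - 1.0 / T)

def liquidus_temperature(x, lo=200.0, hi=T_m):
    """Bisection for the temperature where the last crystal melts.
    The residual decreases monotonically with T on this bracket."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if liquidus_residual(mid, x) > 0.0:
            lo = mid   # mid is below the melting point of the mixture
        else:
            hi = mid
    return 0.5 * (lo + hi)

print("liquidus T at x = 0.8:", round(liquidus_temperature(0.8), 2), "K")
```

The full algorithm repeats this kind of solve with composition-dependent activity coefficients in both phases, which is what allows it to handle solid solutions.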
Abstract:
Superelastic nitinol (NiTi) wires were exploited as highly robust supports for three distinct crosslinked polymeric ionic liquid (PIL)-based coatings in solid-phase microextraction (SPME). Oxidation of the NiTi wires in a boiling 30% (w/w) H2O2 solution and subsequent derivatization with vinyltrimethoxysilane (VTMS) allowed vinyl moieties to be appended to the surface of the support. UV-initiated on-fiber copolymerization of the vinyl-substituted NiTi support with monocationic ionic liquid (IL) monomers and dicationic IL crosslinkers produced a crosslinked PIL-based network covalently attached to the NiTi wire. This modification alleviated the receding of the coating from the support that was observed for an analogous crosslinked PIL applied to unmodified NiTi wires. A series of demanding extraction conditions, including extreme pH, pre-exposure to pure organic solvents, and high temperatures, was applied to investigate the versatility and robustness of the fibers. Acceptable precision for the model analytes was obtained for all fibers under these conditions. Method validation, examining the relative recovery of a homologous group of phthalate esters (PAEs), was performed in drip-brewed coffee (maintained at 60 °C) by direct-immersion SPME. Acceptable recoveries were obtained for most PAEs at the part-per-billion level, even in this exceedingly harsh and complex matrix.
Abstract:
This work describes a direct and coherent strategy to synthesise a molecularly imprinted polymer (MIP) capable of extracting fluconazole from a sample. The MIP was successfully prepared from methacrylic acid (functional monomer), ethylene glycol dimethacrylate (crosslinker), and acetonitrile (porogenic solvent) in the presence of fluconazole as the template molecule, through a non-covalent approach. The non-imprinted polymer (NIP) was prepared following the same synthetic scheme but in the absence of the template. Data obtained from scanning electron microscopy, infrared spectroscopy, thermogravimetric analysis, and nitrogen Brunauer-Emmett-Teller (BET) plots helped to elucidate the structural and morphological characteristics of the MIP and NIP. The application of the MIP as a sorbent was demonstrated by packing it into solid phase extraction cartridges to extract fluconazole from commercial capsule samples through an offline analytical procedure. Quantification of fluconazole was accomplished by UPLC-MS, which resulted in an LOD ≤ 1.63 × 10^-10 mM. Furthermore, a high percentage recovery of 91 ± 10% (n = 9) was obtained. The ability of the MIP to selectively recognize fluconazole was evaluated by comparison with the structural analogues miconazole, tioconazole, and secnidazole, resulting in percentage recoveries of 51, 35, and 32%, respectively.
Abstract:
The use of the scanning tunneling microscope (STM) to investigate Kondo adatoms on normal metallic surfaces reveals a Fano-Kondo behavior of the conductance as a function of the tip bias. In this work, the Doniach-Sunjic expression is used to describe the Kondo peak, and we analyze the effect of a complex Fano phase, arising from an external magnetic field, on the conductance pattern. It is demonstrated that such a phase generates local oscillations of the Fano-Kondo line shape and can lead to the suppression of anti-resonances.
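A minimal numeric sketch of the effect described, using the common generalization of the Fano line shape to a complex asymmetry parameter q = |q| e^(i*phi); the Doniach-Sunjic broadening used in the paper is omitted, and all values are illustrative.

```python
import numpy as np

eps = np.linspace(-10.0, 10.0, 9)   # reduced energy (E - E_K)/Gamma

def fano(eps, q_mag, phi):
    """Fano line shape with a complex asymmetry parameter
    q = |q| * exp(i*phi); phi = 0 recovers the usual real-q form."""
    q = q_mag * np.exp(1j * phi)
    return np.abs(q + eps) ** 2 / (1.0 + eps ** 2)

for phi in (0.0, np.pi / 4, np.pi / 2):
    print(f"phi = {phi:4.2f}:", np.round(fano(eps, 1.0, phi), 3))
# As phi grows the anti-resonance fills in:
# |q + eps|^2 = eps^2 + 2*eps*|q|*cos(phi) + |q|^2
# has no zero unless phi = 0 or pi, so the dip is suppressed.
```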
Abstract:
Aims. An analytical solution for the discrepancy between the observed core-like profiles and the predicted cusp profiles in dark matter halos is studied. Methods. We calculate the distribution function for Navarro-Frenk-White halos and extract energy from the distribution, taking into account the effects of baryonic physics processes. Results. We show with a simple argument that the evolution of a cusp into a flat density profile can be reproduced by a decrease of the initial potential energy.
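For reference, the two standard ingredients such a calculation rests on, written in common notation (not necessarily the paper's):

```latex
% NFW density profile (rho_s, r_s: characteristic density and radius):
\rho(r) = \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^{2}}

% Eddington inversion: isotropic distribution function from rho(Psi),
% with Psi the relative potential and E the relative energy:
f(E) = \frac{1}{\sqrt{8}\,\pi^{2}} \frac{d}{dE}
       \int_{0}^{E} \frac{d\rho}{d\Psi}\,\frac{d\Psi}{\sqrt{E - \Psi}}
```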
Abstract:
Cloud-aerosol interaction is a key issue in the climate system, affecting the water cycle, the weather, and the total energy balance, including the spatial and temporal distribution of latent heat release. Information on the vertical distribution of cloud droplet microphysics and thermodynamic phase as a function of temperature or height can be correlated with details of the aerosol field to provide insight into how these particles affect cloud properties and their consequences for cloud lifetime, precipitation, the water cycle, and the general energy balance. Unfortunately, today's experimental methods still lack the observational tools that can characterize the true evolution of the cloud microphysical, spatial, and temporal structure at the cloud droplet scale and then link these characteristics to environmental factors and to the properties of the cloud condensation nuclei. Here we propose and demonstrate a new experimental approach (the cloud scanner instrument) that provides the microphysical information missing from current experiments and remote sensing options. Cloud scanner measurements can be performed from aircraft, the ground, or a satellite by scanning the side of the clouds from base to top, providing the unique opportunity to obtain snapshots of the cloud droplet microphysical and thermodynamic states as a function of height and brightness temperature in clouds at several development stages. The brightness temperature profile of the cloud side can be directly associated with the thermodynamic phase of the droplets to provide information on the glaciation temperature as a function of different ambient conditions, aerosol concentration, and type. An aircraft prototype of the cloud scanner was built and flown in a field campaign in Brazil. The CLAIM-3D (3-Dimensional Cloud Aerosol Interaction Mission) satellite concept proposed here combines several techniques to simultaneously measure the vertical profile of cloud microphysics, thermodynamic phase, brightness temperature, and the aerosol amount and type in the neighborhood of the clouds. The wide wavelength range and the multi-angle polarization measurements proposed for this mission allow us to estimate the availability and characteristics of aerosol particles acting as cloud condensation nuclei, and their effects on the cloud microphysical structure. These results can provide unprecedented detail on the response of cloud droplet microphysics to natural and anthropogenic aerosols at the size scale where the interaction actually happens.
Abstract:
The electronic properties of liquid ammonia are investigated by a sequential molecular dynamics/quantum mechanics approach. Quantum mechanics calculations for the liquid phase are based on a reparametrized hybrid exchange-correlation functional that reproduces the electronic properties of ammonia clusters [(NH3)n, n = 1-5]. For these small clusters, electron binding energies based on Green's function (electron propagator) theory, coupled cluster with single, double, and perturbative triple excitations, and density functional theory (DFT) are compared. Reparametrized DFT results for the dipole moment, electron binding energies, and electronic density of states of liquid ammonia are reported. The calculated average dipole moment of liquid ammonia (2.05 ± 0.09 D) corresponds to an increase of 27% over the gas phase value and is 0.23 D above a prediction based on a polarizable model of liquid ammonia [Deng, J. Chem. Phys. 100, 7590 (1994)]. Our estimate for the ionization potential of liquid ammonia is 9.74 ± 0.73 eV, approximately 1.0 eV below the gas phase value for the isolated molecule. The theoretical vertical electron affinity of liquid ammonia is predicted to be 0.16 ± 0.22 eV, in good agreement with the experimental result for the location of the bottom of the conduction band (-V0 = 0.2 eV). Vertical ionization potentials and electron affinities correlate with the total dipole moment of the ammonia aggregates.
Abstract:
The phase transition of Reissner-Nordstrom AdS4 interacting with a massive charged scalar field is revisited. We find exactly one stable and one unstable quasinormal mode region for the scalar field, the two being separated by the first marginally stable solution.
Abstract:
A novel solid phase extraction technique is described in which DNA is bound to and eluted from magnetic silica beads in a manner whose efficiency depends on the magnetic manipulation of the beads rather than on the flow of solution through a packed bed. The utility of this technique for isolating reasonably pure, PCR-amplifiable DNA from complex samples is shown by isolating DNA from whole human blood and subsequently amplifying a fragment of the beta-globin gene. By effectively controlling the movement of the solid phase in the presence of a static sample, the issues associated with reproducibly packing a solid phase in a microchannel and maintaining consistent flow rates are eliminated. The technique described here is rapid, simple, and efficient, allowing recovery of more than 60% of the DNA from 0.6 µL of blood at a concentration suitable for PCR amplification. In addition, it requires only inexpensive, common laboratory equipment, making it easily adopted for both clinical point-of-care applications and on-site forensic sample analysis.
Abstract:
Measurements based on absorption, reflectance, or luminescence of molecular species or complex ions can be carried out directly on a solid support, simultaneously with retention of the analyte. The use of this strategy in flow-based systems is advantageous in view of the reproducible handling of solutions in the retention and elution steps. The approach can be exploited to increase sensitivity, minimize reagent consumption and waste generation, and improve selectivity, or for simultaneous determination based on selective retention or on differences in the sorption rates of the analytes. This review focuses on the main characteristics of direct solid-phase measurements in flow systems, including a discussion of advantages and limitations and practical guidelines for the successful implementation of this approach. Selected applications in diverse fields, such as pharmaceutical, food, and environmental analysis, are discussed.
Abstract:
Electrodeposition of a thin copper layer onto titanium wires was carried out in an acidic sulphate bath. The influence of the titanium surface preparation, cathodic current density, copper sulphate and sulphuric acid concentrations, electrical charge density, and stirring of the solution on the adhesion of the electrodeposits was studied using the Taguchi statistical method. An L16 orthogonal array with six control factors at two levels each and three interactions was employed. The analysis of variance of the mean adhesion response and of the signal-to-noise ratio showed the strong influence of cathodic current density on adhesion. On the contrary, the other factors, as well as the three investigated interactions, revealed little or no significant effect. From this study, optimized electrolysis conditions were defined. The copper electrocoating improved the electrical conductivity of the titanium wire, showing that copper-electrocoated titanium wires could be employed both for electrical purposes and for mechanical reinforcement in superconducting magnets.
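A minimal sketch of the Taguchi-style analysis, with an invented two-level design fragment (a real L16 array has 16 runs and up to 15 columns) and hypothetical adhesion responses: compute the larger-is-better signal-to-noise ratio per run and rank factor effects.

```python
import numpy as np

# Fragment of a two-level design matrix (rows = runs, columns = factors);
# values are 0/1 factor levels. Entirely illustrative.
design = np.array([
    [0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0],
])
factors = ["surface_prep", "current_density", "H2SO4_conc"]

# Hypothetical replicated adhesion responses for each run (3 replicates),
# constructed so current_density dominates.
y = np.array([
    [4.0, 4.1, 3.9], [8.0, 8.1, 7.9], [4.2, 4.3, 4.1], [8.3, 8.4, 8.2],
])

def sn_larger_is_better(y_run):
    """Taguchi larger-is-better S/N ratio: -10 log10(mean(1/y^2))."""
    return -10.0 * np.log10(np.mean(1.0 / y_run ** 2))

sn = np.array([sn_larger_is_better(r) for r in y])
for j, name in enumerate(factors):
    # Mean S/N at the high level minus mean S/N at the low level.
    effect = sn[design[:, j] == 1].mean() - sn[design[:, j] == 0].mean()
    print(f"{name}: mean S/N effect = {effect:.2f} dB")
# The factor with the largest |effect| (current_density, by
# construction of the toy data) dominates, mirroring how the ANOVA
# singled out cathodic current density above.
```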