942 results for Machine-tools - numerical control
Abstract:
The postharvest phase has been considered an environment very suitable for the successful application of biological control agents (BCAs). However, the three-way interaction between fungal pathogen, host (fruit) and antagonist is influenced by several parameters, such as temperature, oxidative stresses, oxygen composition and water activity, that can determine the success of biocontrol. Knowledge of the modes of action of BCAs is essential in order to enhance their viability and increase their potential in disease control. This thesis focused on explaining the modes of action of a biological control agent (BCA), Aureobasidium pullulans, in particular the strains L1 and L8, which are effective against postharvest fungal pathogens of fruit. Specifically, this work studied the different modes of action of the BCA: i) the ability to produce volatile organic compounds (VOCs), identified by SPME gas chromatography-mass spectrometry (GC-MS) and tested in in vitro and in vivo assays against Penicillium spp., Botrytis cinerea and Colletotrichum acutatum; ii) the ability to produce lytic enzymes (exo- and endochitinase and β-1,3-glucanase), tested against Monilinia laxa, the causal agent of brown rot of stone fruits; the L1 and L8 lytic enzymes were also evaluated through their relative genes by molecular tools; iii) the competition for space and nutrients, such as sugars (sucrose, glucose and fructose) and iron; the latter induced the production of siderophores, molecules with high affinity for iron chelation, and a molecular investigation was carried out to better understand the gene regulation strictly correlated with the production of these chelating molecules, while the competition for space against M. laxa was verified by electron microscopy techniques; and iv) an in-depth bibliographical analysis of BCA mechanisms of action and their possible combination with physical and chemical treatments.
Abstract:
Coarse graining is a popular technique used in physics to speed up computer simulations of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are open issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Furthermore, the singular value analysis of the structure and its approximation allows one to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
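The regularizing Levenberg-Marquardt idea can be illustrated on a toy potential-fitting problem. The sketch below is illustrative only: it assumes a simple Lennard-Jones form, a fixed damping parameter, and synthetic target data, and is not the thesis's implementation.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, theta0, lam=1e-2, n_iter=100):
    """Minimize ||residual(theta)||^2 with damped Gauss-Newton steps.

    The damping lam regularizes the normal equations: large lam gives small,
    gradient-descent-like steps; small lam approaches plain Gauss-Newton.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # Damped normal equations: (J^T J + lam I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(theta.size), -J.T @ r)
        theta = theta + delta
    return theta

# Toy inverse problem: recover the (epsilon, sigma) parameters of a
# Lennard-Jones pair potential from "target" potential values.
r_grid = np.linspace(1.0, 2.5, 40)

def potential(theta):
    eps, sig = theta
    return 4.0 * eps * ((sig / r_grid) ** 12 - (sig / r_grid) ** 6)

target = potential(np.array([1.5, 1.1]))   # stands in for structural data

def residual(theta):
    return potential(theta) - target

def jacobian(theta, h=1e-6):
    # Forward-difference Jacobian, adequate for this smooth toy problem
    J = np.empty((r_grid.size, theta.size))
    base = potential(theta)
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += h
        J[:, j] = (potential(tp) - base) / h
    return J

fit = levenberg_marquardt(residual, jacobian, theta0=np.array([1.0, 1.0]))
```

Because the damped normal equations stay solvable even when some parameter directions barely affect the residual, the iteration remains stable precisely in the weak-parameter situations the singular value analysis identifies.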
Abstract:
Recent studies have found that soil-atmosphere coupling, through soil moisture, is crucial for correctly simulating the amplitude, duration and intensity of heat waves. Moreover, it was found that soil moisture depletion in both winter and spring anticipates strong heat waves during the summer. In geophysical studies, irrigation can be regarded as an anthropogenic forcing of soil moisture, alongside changes in land properties. In this study, irrigation was added to a hydrostatic limited-area model (BOLAM) coupled with the soil. The response of the model to the irrigation perturbation is analyzed during a dry summer season. To identify a dry summer with overall positive temperature anomalies, an extensive climatological characterization of 2015 was carried out. The method included a statistical validation of the reference-period distribution used to calculate the anomalies. Drought conditions were observed during summer 2015 and the preceding seasons, both in the analyzed region and over the Alps; moreover, July was characterized as an extreme event with respect to the reference distribution. The numerical simulation covered the summer season of 2015 and consisted of two runs: a control run (CTR), with the soil coupling, and a perturbed run (IPR). The perturbation consists of a land-use mask created from the FAO Cropland dataset, over which an irrigation water flux of 3 mm/day was applied from 6 A.M. to 9 A.M. every day. The results show that the differences between CTR and IPR have a strong daily cycle. The main modifications are to the properties of the air masses, not to the dynamics. However, changes in the circulation at the boundaries of the Po Valley are observed, and a diagnostic spatial correlation of the variable differences shows that the soil moisture perturbation explains well the variation observed in the 2-meter temperature and in the latent heat fluxes. On the other hand, it does not explain the spatial shift up- and downslope observed during different periods of the day.
Given these results, the irrigation process affects the atmospheric properties on a scale larger than the irrigated area; it is therefore important in daily forecasting, particularly during hot and dry periods.
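The irrigation forcing above (3 mm/day applied from 06 to 09 local time) translates into the water mass flux a land-surface scheme would ingest while the irrigation is active; the depth and time window come from the abstract, while the conversion itself (1 mm of water over 1 m² weighs 1 kg) is standard:

```python
# Convert the irrigation perturbation of 3 mm/day, applied only during the
# 06:00-09:00 window, into a mass flux in kg m^-2 s^-1 while active.
daily_depth_mm = 3.0                 # mm of water per day (from the study)
window_s = 3 * 3600                  # active window: 3 hours in seconds

# 1 mm of water depth over 1 m^2 corresponds to 1 kg of water
flux_kg_m2_s = daily_depth_mm * 1.0 / window_s
```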
Abstract:
This investigation uses simulation to explore the inherent tradeoffs of controlling high-speed, highly robust walking robots while minimizing energy consumption. Using a novel controller that optimizes robustness, energy economy, and speed of a simulated robot on rough terrain, the user can adjust their priorities among these three outcome measures and systematically generate a performance curve assessing the tradeoffs associated with these metrics.
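The priority-adjustment idea can be sketched as a weighted scalarization swept over user weights; the candidate controller evaluations below are invented placeholders, not simulation results from this work.

```python
import numpy as np

# Each row is a hypothetical controller evaluation on rough terrain:
# speed (m/s), robustness (0-1), energy cost (J/m).
candidates = np.array([
    [2.0, 0.60, 120.0],
    [1.2, 0.90,  80.0],
    [0.8, 0.95,  60.0],
    [2.5, 0.40, 200.0],
])

# Normalize each metric to [0, 1] so the user weights are comparable
lo, hi = candidates.min(axis=0), candidates.max(axis=0)
norm = (candidates - lo) / (hi - lo)

def best_candidate(w_speed, w_robust, w_energy):
    """Pick the candidate maximizing the weighted score (energy is a cost)."""
    score = (w_speed * norm[:, 0] + w_robust * norm[:, 1]
             - w_energy * norm[:, 2])
    return int(np.argmax(score))

# Sweeping the speed-vs-economy priority traces out the performance curve
curve = [best_candidate(w, 1.0, 1.0 - w) for w in np.linspace(0.0, 1.0, 11)]
```

At one end of the sweep the economical, robust gait wins; at the other, a faster gait is selected, which is the tradeoff curve the abstract describes.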
Abstract:
The spatio-temporal control of gene expression is fundamental to elucidate cell proliferation and deregulation phenomena in living systems. Novel approaches based on light-sensitive multiprotein complexes have recently been devised, showing promising perspectives for the noninvasive and reversible modulation of DNA-transcriptional activity in vivo. This has lately been demonstrated in a striking way through the generation of the artificial protein construct light-oxygen-voltage (LOV)-tryptophan-activated protein (TAP), in which the LOV-2-Jα photoswitch of phototropin1 from Avena sativa (AsLOV2-Jα) has been ligated to the tryptophan-repressor (TrpR) protein from Escherichia coli. Although tremendous progress has been achieved in the generation of such protein constructs, a detailed understanding of their functioning as optogenetic tools is still in its infancy. Here, we elucidate the early stages of the light-induced regulatory mechanism of LOV-TAP at the molecular level, using the noninvasive molecular dynamics simulation technique. More specifically, we find that Cys450-FMN-adduct formation in the AsLOV2-Jα-binding pocket after photoexcitation induces the cleavage of the peripheral Jα-helix from the LOV core, causing a change of its polarity and electrostatic attraction of the photoswitch onto the DNA surface. This goes along with the flexibilization through unfolding of a hairpin-like helix-loop-helix region interlinking the AsLOV2-Jα- and TrpR-domains, ultimately enabling the condensation of LOV-TAP onto the DNA surface. By contrast, in the dark state the AsLOV2-Jα photoswitch remains inactive and exerts a repulsive electrostatic force on the DNA surface. This leads to a distortion of the hairpin region, which finally relieves its tension by causing the disruption of LOV-TAP from the DNA.
Abstract:
OBJECTIVE: The purpose of this study was to adapt and improve a minimally invasive two-step postmortem angiographic technique for use on human cadavers. Detailed mapping of the entire vascular system is almost impossible with conventional autopsy tools. The technique described should be valuable in the diagnosis of vascular abnormalities. MATERIALS AND METHODS: Postmortem perfusion with an oily liquid is established with a circulation machine. An oily contrast agent is introduced as a bolus injection, and radiographic imaging is performed. In this pilot study, the upper or lower extremities of four human cadavers were perfused. In two cases, the vascular system of a lower extremity was visualized with anterograde perfusion of the arteries. In the other two cases, in which the suspected cause of death was drug intoxication, the veins of an upper extremity were visualized with retrograde perfusion of the venous system. RESULTS: In each case, the vascular system was visualized up to the level of the small supplying and draining vessels. In three of the four cases, vascular abnormalities were found. In one instance, a venous injection mark engendered by the self-administration of drugs was rendered visible by exudation of the contrast agent. In the other two cases, occlusion of the arteries and veins was apparent. CONCLUSION: The method described is readily applicable to human cadavers. After establishment of postmortem perfusion with paraffin oil and injection of the oily contrast agent, the vascular system can be investigated in detail and vascular abnormalities rendered visible.
Abstract:
Reducing the uncertainties related to blade dynamics by improving the quality of numerical simulations of the fluid-structure interaction process is key to a breakthrough in wind-turbine technology. A fundamental step in that direction is the implementation of aeroelastic models capable of capturing the complex features of innovative prototype blades, so they can be tested at realistic full-scale conditions with a reasonable computational cost. We make use of a code based on a combination of two advanced numerical models implemented on a parallel HPC supercomputer platform. First, a model of the structural response of heterogeneous composite blades, based on a variation of the dimensional reduction technique proposed by Hodges and Yu. This technique reduces the geometrical complexity of the blade section to a stiffness matrix for an equivalent beam. The reduced 1-D strain energy is equivalent to the actual 3-D strain energy in an asymptotic sense, allowing accurate modeling of the blade structure as a 1-D finite-element problem. This substantially reduces the computational effort required to model the structural dynamics at each time step. Second, a novel aerodynamic model based on an advanced implementation of BEM (Blade Element Momentum) theory, where all velocities and forces are re-projected through orthogonal matrices into the instantaneous deformed configuration to fully include the effects of large displacements and rotation of the airfoil sections in the computation of aerodynamic forces. This allows the aerodynamic model to take into account the effects of the complex flexo-torsional deformation that can be captured by the more sophisticated structural model mentioned above. In this thesis we have successfully developed a powerful computational tool for the aeroelastic analysis of wind-turbine blades.
Owing to the particular features mentioned above, namely a full representation of the combined modes of deformation of the blade as a complex structural part and of their effects on the aerodynamic loads, it constitutes a substantial advancement over the state-of-the-art aeroelastic models currently available, such as the FAST-Aerodyn suite. In this thesis, we also include the results of several experiments on the NREL-5MW blade, which is widely accepted today as a benchmark blade, together with some modifications intended to explore the capacity of the new code to capture features of blade-dynamic behavior that are normally overlooked by existing aeroelastic models.
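The re-projection of velocities through orthogonal matrices mentioned above can be sketched in a few lines. The numbers (a 5° torsional twist, an arbitrary inflow vector) are hypothetical, and a real implementation composes rotations for flap, edge and twist at every section and time step:

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula: rotation by `angle` (rad) about unit vector `axis`."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Global-frame relative wind at a blade section (wind minus blade motion)
v_rel_global = np.array([8.0, 0.0, -55.0])       # m/s, made-up values

# Orientation of the deformed section: a 5-degree twist about the pitch axis
R = rotation_matrix(axis=[0.0, 0.0, 1.0], angle=np.deg2rad(5.0))

# Re-project into the deformed section frame; since R is orthogonal,
# R.T is its inverse and the wind speed is preserved exactly.
v_rel_section = R.T @ v_rel_global

# Angle of attack in the section plane (x chordwise, y normal)
alpha_deg = np.degrees(np.arctan2(v_rel_section[1], v_rel_section[0]))
```

Because the matrices are orthogonal, the projection changes only the decomposition of the inflow, never its magnitude, which is what lets the aerodynamic forces follow the deformed configuration consistently.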
Abstract:
This dissertation presents an effective quasi-one-dimensional (1-D) computational simulation tool and a full two-dimensional (2-D) computational simulation methodology for steady annular/stratified internal condensing flows of pure vapor. These simulation tools are used to investigate internal condensing flows in both gravity-driven and shear-driven environments. Through accurate numerical simulations of the full two-dimensional governing equations, results for laminar/laminar condensing flows inside mm-scale ducts are presented. The methodology has been developed on the MATLAB/COMSOL platform and is currently capable of simulating film-wise condensation for steady (and unsteady) flows. Moreover, a novel 1-D solution technique, capable of simulating condensing flows inside rectangular and circular ducts with different thermal boundary conditions, is also presented. The results obtained from the 2-D scientific tool and the 1-D engineering tool are validated and synthesized with experimental results for gravity-dominated flows inside a vertical tube and an inclined channel, and also for shear/pressure-driven flows inside horizontal channels. Furthermore, these simulation tools are employed to demonstrate key differences in physics between gravity-dominated and shear/pressure-driven flows. A transition map that distinguishes shear-driven, gravity-driven, and “mixed”-driven flow zones within the non-dimensional parameter space governing these duct flows is presented, along with film-thickness and heat-transfer correlations valid in these zones. It is also shown that internal condensing flows in micrometer-scale ducts experience shear-driven flow, even in different gravitational environments. The full 2-D steady computational tool has been employed to investigate the length of annularity.
The result for a shear-driven flow in a horizontal channel shows that, in the absence of any noise or pressure fluctuation at the inlet, the onset of non-annularity is partly due to insufficient shear at the liquid-vapor interface. This result is being further corroborated and investigated by R. R. Naik with the help of the unsteady simulation tool. The condensing-flow results and the flow-physics understanding developed through these simulation tools will be instrumental in the reliable design of modern micro-scale and space-based thermal systems.
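In the gravity-dominated zone, the classical Nusselt laminar-film solution is the natural reference for the film-thickness correlations mentioned above. The sketch below evaluates it with rough, water-like property values that are placeholders, not data from this dissertation:

```python
import numpy as np

# Classical Nusselt solution for a gravity-driven laminar condensate film
# on a vertical surface: delta(x) grows like x**(1/4) down the plate.
g = 9.81                         # gravitational acceleration (m/s^2)
k_l, mu_l = 0.68, 2.8e-4         # liquid conductivity (W/m K), viscosity (Pa s)
rho_l, rho_v = 958.0, 0.6        # liquid and vapor densities (kg/m^3)
h_fg = 2.257e6                   # latent heat of vaporization (J/kg)
dT = 5.0                         # T_sat - T_wall (K)

def film_thickness(x):
    """Nusselt film thickness delta(x) at distance x down the plate (m)."""
    return (4.0 * k_l * mu_l * dT * x
            / (g * rho_l * (rho_l - rho_v) * h_fg)) ** 0.25

x = np.linspace(0.01, 1.0, 100)
delta = film_thickness(x)        # monotonically thickening film
```

The quarter-power growth is the signature of gravity-dominated condensation; shear-driven zones follow a different scaling, which is exactly what the transition map separates.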
Abstract:
To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
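The equation-error idea can be sketched in one dimension: for -(q u')' = f with u measured, the PDE residual is linear in q, so a Tikhonov-regularized estimate reduces to a single linear least-squares solve. The finite-difference discretization and the toy coefficient below are illustrative, not the paper's formulation:

```python
import numpy as np

# Estimate q(x) in -(q u')' = f on (0,1) from a "measured" solution u.
n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
xm = 0.5 * (x[:-1] + x[1:])          # midpoints, where q is estimated

q_true = 1.0 + xm                    # coefficient to recover
u = np.sin(np.pi * x)                # "measured" state
# Matching source term for q = 1 + x, u = sin(pi x): f = -(q u')'
f = -np.pi * np.cos(np.pi * x) + np.pi**2 * (1.0 + x) * np.sin(np.pi * x)

w = np.diff(u) / h                   # u' at midpoints
# Equation error at interior node i: (w[i-1]/h) q[i-1] - (w[i]/h) q[i] = f[i]
M = np.zeros((n - 2, n - 1))
for i in range(1, n - 1):
    M[i - 1, i - 1] = w[i - 1] / h
    M[i - 1, i] = -w[i] / h

# First-difference smoothing operator for the Tikhonov term alpha*||D q||^2;
# it stabilizes q where u' is small (near x = 0.5) and the data say little.
D = np.diff(np.eye(n - 1), axis=0)
alpha = 1e-4

# Stacked least-squares system [M; sqrt(alpha) D] q = [f_interior; 0]
A = np.vstack([M, np.sqrt(alpha) * D])
b = np.concatenate([f[1:-1], np.zeros(n - 2)])
q_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Note how the regularization is doing real work here: where u' vanishes, the equation error carries no information about q, which is the instability the Tikhonov term controls.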
Abstract:
This doctoral thesis presents computational work and its synthesis with experiments for internal (tube and channel geometries) as well as external (flow of a pure vapor over a horizontal plate) condensing flows. The computational work obtains accurate numerical simulations of the full two-dimensional governing equations for steady and unsteady condensing flows in gravity/0g environments. This doctoral work investigates flow features, flow regimes, attainability issues, stability issues, and responses to boundary fluctuations for condensing flows in different flow situations. This research finds new features of unsteady solutions of condensing flows; reveals interesting differences between gravity-driven and shear-driven situations; and discovers novel boundary-condition sensitivities of shear-driven internal condensing flows. The synthesis of computational and experimental results presented here for gravity-driven in-tube flows lays the framework for future two-phase component analysis in any thermal system. It is shown for both gravity-driven and shear-driven internal condensing flows that the steady governing equations have unique solutions for a given inlet pressure, given inlet vapor mass flow rate, and fixed cooling method for the condensing surface. The unsteady equations of shear-driven internal condensing flows, however, can yield different “quasi-steady” solutions based on different specifications of the exit pressure (equivalently, the exit mass flow rate) concurrent with the inlet pressure specification. This thesis presents a novel categorization of internal condensing flows based on their sensitivity to concurrently applied boundary (inlet and exit) conditions. The computational investigations of an external shear-driven flow of vapor condensing over a horizontal plate show the limits of applicability of the analytical solution. Simulations for this external condensing flow address its stability issues and throw light on flow-regime transitions caused by ever-present bottom-wall vibrations.
It is identified that the laminar-to-turbulent transition for these flows can be affected by these vibrations. Detailed investigations of the dynamic stability of this shear-driven external condensing flow result in the introduction of a new variable that characterizes the ratio of the strength of the underlying stabilizing attractor to that of the destabilizing vibrations. Besides the development of CFD tools and computational algorithms, a direct application of the research done for this thesis is the effective prediction and design of two-phase components in thermal systems used in different applications. Some of the important internal condensing-flow results on sensitivities to boundary fluctuations are also expected to be applicable to the flow-boiling phenomenon. The novel flow sensitivities discovered through this research, if employed effectively after system-level analysis, will enable better control strategies in ground- and space-based two-phase thermal systems.
Abstract:
The accuracy of simulating the aerodynamic and structural properties of the blades is crucial in wind-turbine technology. Hence the models used to implement these features need to be very precise and their level of detail needs to be high. With the variety of blade designs being developed, the models should be versatile enough to adapt to the changes required by every design. We implement a combination of numerical models associated with the structural and aerodynamic parts of the simulation, using the computational power of a parallel HPC cluster. The structural part models the heterogeneous internal structure of the beam based on a novel implementation of the generalized Timoshenko beam model technique. Using this technique, the 3-D structure of the blade is reduced to an asymptotically equivalent 1-D beam. This reduces the computational cost of the model without compromising its accuracy. This structural model interacts with the flow model, which is a modified version of Blade Element Momentum (BEM) theory. The modified version of BEM accounts for the large deflections of the blade and also considers the pre-defined structure of the blade. The coning and sweeping of the blade, the tilt of the nacelle, and the twist of the sections along the blade length are all computed by the model; none of these are considered in classical BEM theory. Each of these two models provides feedback to the other, and the interactive computations lead to more accurate outputs. We successfully implemented the computational models to analyze and simulate the structural and aerodynamic aspects of the blades. The interactive nature of these models and their ability to recompute data using feedback from each other makes this code more efficient than the commercial codes available.
In this thesis we begin with the verification of these models by testing them on the well-known benchmark blade of the NREL-5MW Reference Wind Turbine, on an alternative fixed-speed stall-controlled blade design proposed by Delft University, and on a novel alternative design that we proposed for a variable-speed stall-controlled turbine, which offers the potential for more uniform power control and improved annual energy production. To optimize the power output of the stall-controlled blade, we modify the existing designs and study their behavior using the aforementioned aeroelastic model.
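The payoff of the dimensional reduction described above is that the blade becomes a 1-D finite-element beam. As a minimal illustration (not the thesis's generalized Timoshenko implementation, which couples all six section stiffnesses), here is a classic Euler-Bernoulli bending element built from one equivalent section stiffness EI, verified against the analytic cantilever tip deflection:

```python
import numpy as np

def bending_element(EI, L):
    """4x4 Euler-Bernoulli beam element stiffness.

    Degrees of freedom: [w1, theta1, w2, theta2] (deflection and rotation
    at each end); EI is the equivalent section bending stiffness.
    """
    return EI / L**3 * np.array([
        [ 12.0,   6*L,   -12.0,   6*L  ],
        [  6*L,  4*L**2,  -6*L,  2*L**2],
        [-12.0,  -6*L,    12.0,  -6*L  ],
        [  6*L,  2*L**2,  -6*L,  4*L**2],
    ])

# Single-element cantilever with a tip point load (placeholder numbers)
EI, L, P = 5.0e6, 10.0, 1.0e3
K = bending_element(EI, L)

# Clamp node 1 (drop its two dofs), apply P at the tip deflection dof
K_free = K[2:, 2:]
d = np.linalg.solve(K_free, np.array([P, 0.0]))
tip_deflection = d[0]                # analytic value: P*L**3 / (3*EI)
```

For a tip point load the cubic Hermite element reproduces the exact solution, which is why this one-element check against P·L³/(3EI) is a standard verification step.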
Abstract:
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, "Where did this object come from?" The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control-flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back in time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
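The flow-centric idea, an object carrying the history of the code that touched it, can be caricatured in a few lines. This is a toy illustration of the concept only, not the Object-Flow VM or Compass (which track flow at the virtual-machine level, without modifying application classes); all names are invented:

```python
# Each tracked object records the sites that touched it, so a faulty
# object can be traced back through its history.
class Tracked:
    def __init__(self, value, origin):
        self.value = value
        self.history = [("created", origin)]   # oldest event first

    def touch(self, event, site):
        self.history.append((event, site))

acct = Tracked(100, origin="parse_input")
acct.touch("mutated", "apply_discount")    # the step that corrupted state
acct.touch("stored", "cache.put")

# "Where did this object come from?" -- walk its history backwards,
# most recent touch first, as a back-in-time debugger would present it.
provenance = [site for _, site in reversed(acct.history)]
```

The point of the VM-level approach is precisely that developers get this history for every object for free, without instrumenting their classes as done here.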
Abstract:
Bovine besnoitiosis is considered an emerging chronic and debilitating disease in Europe. Many infections remain subclinical, and the only sign of disease is the presence of parasitic cysts in the sclera and conjunctiva. Serological tests are useful for detecting asymptomatic cattle/sub-clinical infections for control purposes, as there are no effective drugs or vaccines. For this purpose, diagnostic tools need to be further standardized. Thus, the aim of this study was to compare the serological tests available in Europe in a multi-centred study. A coded panel of 241 well-characterized sera from infected and non-infected bovines was provided by all participants (SALUVET-Madrid, FLI-Wusterhausen, ENV-Toulouse, IPB-Berne). The tests evaluated were as follows: an in-house ELISA, three commercial ELISAs (INGEZIM BES 12.BES.K1 INGENASA, PrioCHECK Besnoitia Ab V2.0, ID Screen Besnoitia indirect IDVET), two IFATs and seven Western blot tests (tachyzoite and bradyzoite extracts under reducing and non-reducing conditions). Two different definitions of a gold standard were used: (i) the result of the majority of tests ('Majority of tests') and (ii) the majority of test results plus pre-test information based on clinical signs ('Majority of tests plus pre-test info'). Relative to the gold standard 'Majority of tests', almost 100% sensitivity (Se) and specificity (Sp) were obtained with the SALUVET-Madrid and FLI-Wusterhausen tachyzoite- and bradyzoite-based Western blot tests under non-reducing conditions. Among the ELISAs, PrioCHECK Besnoitia Ab V2.0 showed 100% Se and 98.8% Sp, whereas ID Screen Besnoitia indirect IDVET showed 97.2% Se and 100% Sp. The in-house ELISA and INGEZIM BES 12.BES.K1 INGENASA showed 97.3% and 97.2% Se, and 94.6% and 93.0% Sp, respectively. IFAT FLI-Wusterhausen performed better than IFAT SALUVET-Madrid, with 100% Se and 95.4% Sp.
Relative to the gold standard 'Majority of tests plus pre-test info', Sp significantly decreased; this result was expected because of the existence of seronegative animals with clinical signs. All ELISAs performed very well and could be used in epidemiological studies; however, the Western blot tests performed better and could be employed as a posteriori tests for control purposes in the case of uncertain results from valuable samples.
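For reference, sensitivity and specificity figures like those quoted above come from a confusion matrix against the chosen gold standard; a minimal sketch, with invented counts rather than the study's data:

```python
# Sensitivity = fraction of gold-standard-positive sera the test detects;
# specificity = fraction of gold-standard-negative sera it clears.
def se_sp(tp, fp, fn, tn):
    """Return (sensitivity, specificity) as fractions."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# e.g. a test detecting 107 of 110 infected and 129 of 131 non-infected
se, sp = se_sp(tp=107, fp=2, fn=3, tn=129)
```

Note that changing the gold-standard definition reassigns sera between the positive and negative columns, which is why Sp drops under the 'plus pre-test info' standard.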
Abstract:
Gap junctions between neurons form the structural substrate for electrical synapses. Connexin 36 (Cx36, and its non-mammalian ortholog connexin 35) is the major neuronal gap junction protein in the central nervous system (CNS), and contributes to several important neuronal functions including neuronal synchronization, signal averaging, network oscillations, and motor learning. Connexin 36 is strongly expressed in the retina, where it is an obligatory component of the high-sensitivity rod photoreceptor pathway. A fundamental requirement of the retina is to adapt to broadly varying inputs in order to maintain a dynamic range of signaling output. Modulation of the strength of electrical coupling between networks of retinal neurons, including the Cx36-coupled AII amacrine cell in the primary rod circuit, is a hallmark of retinal luminance adaptation. However, very little is known about the mechanisms regulating dynamic modulation of Cx36-mediated coupling. The primary goal of this work was to understand how cellular signaling mechanisms regulate coupling through Cx36 gap junctions. We began by developing and characterizing phospho-specific antibodies against key regulatory phosphorylation sites on Cx36. Using these tools we showed that phosphorylation of Cx35 in fish models varies with light adaptation state, and is modulated by acute changes in background illumination. We next turned our focus to the well-studied and readily identifiable AII amacrine cell in mammalian retina. Using this model we showed that increased phosphorylation of Cx36 is directly related to increased coupling through these gap junctions, and that the dopamine-stimulated uncoupling of the AII network is mediated by dephosphorylation of Cx36 via protein kinase A-stimulated protein phosphatase 2A activity. We then showed that increased phosphorylation of Cx36 on the AII amacrine network is driven by depolarization of presynaptic ON-type bipolar cells as well as background light increments. 
This increase in phosphorylation is mediated by activation of extrasynaptic NMDA receptors associated with Cx36 gap junctions on AII amacrine cells and by Ca2+-calmodulin-dependent protein kinase II activation. Finally, these studies indicated that coupling is regulated locally at individual gap junction plaques. This work provides a framework for future study of regulation of Cx36-mediated coupling, in which increased phosphorylation of Cx36 indicates increased neuronal coupling.
Abstract:
Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifacts. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to the reviewer so that he or she may be better equipped to identify performance degradation prior to its manifestation in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions of non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time-series floods with induced, progressively worsening defects present within the field of view. The sensitivity of conventional, global figures-of-merit for detecting changes in uniformity was evaluated and compared to these new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions of non-uniformity before any single flood image has a NEMA uniformity value in excess of 5%.
The sensitivity of these image-space algorithms was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures-of-merit demonstrated their sensitivity to shifts in detector uniformity. The image-space algorithms are computationally efficient. Therefore, the image-space algorithms should be used concomitantly with the trending of the global figures-of-merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
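The conventional figure-of-merit discussed above, NEMA integral uniformity, reduces a smoothed flood image to 100·(max − min)/(max + min). The sketch below is a simplified version (it applies the standard 9-point kernel but omits the UFOV/CFOV masking and pixel-validity rules of the full NEMA procedure), run on a synthetic flood with an invented cold defect:

```python
import numpy as np

# NEMA 9-point smoothing kernel applied before the uniformity calculation
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

def smooth(img):
    """Convolve with the 9-point kernel (edges handled by replication)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += KERNEL[di, dj] * padded[di:di + img.shape[0],
                                           dj:dj + img.shape[1]]
    return out

def integral_uniformity(flood):
    """Integral uniformity in percent: 100*(max-min)/(max+min) after smoothing."""
    s = smooth(flood)
    return 100.0 * (s.max() - s.min()) / (s.max() + s.min())

# A perfectly flat flood scores 0%; a small cold defect raises the score
flat = np.full((64, 64), 1000.0)
defect = flat.copy()
defect[30:34, 30:34] *= 0.8          # 20% cold region, 4x4 pixels

iu_flat = integral_uniformity(flat)
iu_defect = integral_uniformity(defect)
```

A single global number like this says nothing about where the defect sits or how it evolves over time, which is exactly the gap the image-space, multi-resolution methods of this thesis address.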