941 results for Active Flow Control


Relevance: 30.00%

Publisher:

Abstract:

Background: Phagocytosis of apoptotic cells by macrophages induces a suppressor phenotype. Previous data from our group suggested that this occurs via platelet-activating factor receptor (PAF-R)-mediated pathways. In the present study, we investigated the impact of apoptotic cell inoculation or induction by a chemotherapeutic agent (dacarbazine, DTIC) on tumour growth, microenvironmental parameters and survival, and the effect of treatment with a PAF-R antagonist (WEB2170). These studies were performed in murine tumours: Ehrlich Ascites Tumour (EAT) and B16F10 melanoma.

Methods: Tumour growth was assessed by direct counting of EAT cells in the ascites or by measuring the volume of the solid tumour. Parameters of the tumour microenvironment, such as the frequency of cells expressing cyclo-oxygenase-2 (COX-2), caspase-3 and galectin-3, and microvascular density, were determined by immunohistochemistry. Levels of vascular endothelial growth factor (VEGF) and prostaglandin E2 (PGE2) were determined by ELISA, and levels of nitric oxide (NO) by the Griess reaction. PAF-R expression was analysed by immunohistochemistry and flow cytometry.

Results: Inoculation of apoptotic cells before EAT implantation stimulated tumour growth. This effect was reversed by in vivo pre-treatment with WEB2170. This treatment also reduced tumour growth and modified the microenvironment by reducing PGE2, VEGF and NO production. In B16F10 melanoma, WEB2170 alone or in association with DTIC significantly reduced tumour volume. Survival of the tumour-bearing mice was not affected by WEB2170 treatment but was significantly improved by the combination of DTIC with WEB2170. Tumour microenvironment elements were among the targets of the combination therapy, since the relative frequency of COX-2 and galectin-3 positive cells and the microvascular density within the tumour mass were significantly reduced by treatment with WEB2170 or DTIC alone or in combination. Antibodies to PAF-R stained cells from inside the tumour, but not tumour cells grown in vitro. At the tissue level, a few cells (probably macrophages) stained positively with antibodies to PAF-R.

Conclusions: We suggest that PAF-R-dependent pathways are activated during experimental tumour growth, modifying the microenvironment and the phenotype of the tumour macrophages in such a way as to favour tumour growth. Combination therapy with a PAF-R antagonist and a chemotherapeutic drug may represent a new and promising strategy for the treatment of some tumours.

Relevance: 30.00%

Publisher:

Abstract:

Background: Assuming a higher risk of latent tuberculosis (TB) infection in the population of Rio de Janeiro, Brazil, in October 1998 the TB Control Program of the Clementino Fraga Filho Hospital (CFFH) started routinely recommending a two-step tuberculin skin test (TST) in contacts of pulmonary TB cases, in order to distinguish a boosting reaction, due to a recall of delayed hypersensitivity previously established by infection with Mycobacterium tuberculosis (M.tb) or BCG vaccination, from a tuberculin conversion. The aim of this study was to assess the prevalence of boosted tuberculin skin tests among contacts of individuals with active pulmonary TB.

Methods: Retrospective cohort of TB contacts ≥ 12 years old evaluated between October 1st, 1998 and October 31st, 2001. Contacts with an initial TST ≤ 4 mm were considered negative and had a second TST applied after 7–14 days. A boosting reaction was defined as a second TST ≥ 10 mm with an increase in induration ≥ 6 mm relative to the first TST. All contacts with either a positive initial or repeat TST had a chest x-ray to rule out active TB disease, and initially positive contacts were offered isoniazid preventive therapy. Contacts who boosted did not receive treatment for latent TB infection and were followed for 24 months to monitor the development of TB. Statistical analysis of dichotomous variables was performed using the Chi-square test. Differences were considered significant at p < 0.05.

Results: Fifty-four percent (572/1060) of contacts had an initial negative TST and 79% of them (455/572) had a second TST. Boosting was identified in 6% (28/455). The mean age of contacts with a boosting reaction was 42.3 ± 21.1 years and of those with no boosting 28.7 ± 21.7 years (p = 0.01). Fifty percent (14/28) of individuals whose test boosted met the criteria for TST conversion on the second TST (increase in induration ≥ 10 mm). None of the 28 contacts whose reaction boosted developed TB disease within two years following the TST.

Conclusion: The low number of contacts with boosting and the difficulty in distinguishing boosting from TST conversion on the second TST suggest that the two-step TST strategy among contacts of active TB cases may not be useful. However, this conclusion must be interpreted with caution because of the small number of subjects followed.

Relevance: 30.00%

Publisher:

Abstract:

Background: Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performance of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria.

Methods: The study was divided into two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared.

Results: Nested PCR was shown to be the gold standard for the diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and presented the highest specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infections (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of correct diagnoses). The MalDANN system using epidemiological data alone performed worse than light microscopy (56% of correct diagnoses). However, when information regarding plasma levels of interleukin-10 and interferon-gamma was included, the MalDANN performance increased substantially (80% of correct diagnoses).

Conclusions: An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of MalDANN using solely epidemiological data, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
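
As a rough illustration of the kind of classifier MalDANN represents, the sketch below trains a tiny feed-forward neural network on synthetic data. The feature set, network size and training scheme are assumptions made for illustration only; they are not the published MalDANN architecture or data.

```python
# Minimal sketch of a neural-network classifier in the spirit of MalDANN.
# The hypothetical inputs mimic epidemiological and cytokine features
# (e.g. age, time in the endemic area, IL-10, IFN-gamma); all values are synthetic.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))                     # 200 subjects, 4 features
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(float)

# One hidden layer with sigmoid activations, trained by plain gradient descent.
W1 = rng.normal(scale=0.1, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                      # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()              # predicted probability of malaria
    grad_out = (p - y)[:, None] / len(y)          # gradient of the log-loss
    grad_h = grad_out @ W2.T * h * (1 - h)        # back-propagate to the hidden layer
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
print("training accuracy:", ((p > 0.5) == y).mean())
```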

Relevance: 30.00%

Publisher:

Abstract:

Variable rate sprinklers (VRS) have been developed to promote localized water application in irrigated areas. In precision irrigation, a VRS that permits fine control of the flow rate and, at the same time, provides satisfactory radial distribution profiles over a range of pressures and flow rates is really necessary. The objective of this work was to evaluate the performance and radial distribution profiles of a newly developed VRS, which varies the nozzle cross-sectional area by moving a pin in or out with a stepper motor. Field tests were performed under different conditions of service pressure, rotation angle imposed on the pin and flow rate, which resulted in maximum water throw radii ranging from 7.30 to 10.38 m. In the experiments in which the service pressure remained constant, the maximum throw radius varied from 7.96 to 8.91 m. Averages of repetitions performed under no-wind conditions or with winds below 1.3 m s-1 were used. The VRS with the four-stream deflector resulted in a greater throw radius than the six-stream deflector; however, the six-stream deflector produced greater precipitation intensities as well as better distribution. Thus, the choice of deflector should be based on project requirements, bearing in mind the differences in the results obtained. With a small nozzle opening, the VRS produced small water droplets that, on visual inspection, appeared suitable for foliar chemigation. Regarding the comparison between the estimated and observed flow rates, the stepper motor produced excellent results.
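
The estimated flow rate mentioned in the last sentence can be obtained from the standard orifice equation once the open nozzle area is known from the pin position. The sketch below illustrates that calculation; the geometry, discharge coefficient and steps-to-travel mapping are assumed values for illustration, not the parameters of the tested sprinkler.

```python
# Hedged sketch: estimating VRS flow rate from the stepper-motor position using
# the orifice equation Q = Cd * A * sqrt(2 * dP / rho). Geometry, discharge
# coefficient and the steps-to-travel mapping are illustrative assumptions.
import math

RHO = 1000.0                  # water density, kg/m^3
CD = 0.95                     # assumed discharge coefficient
NOZZLE_DIAM = 5.0e-3          # assumed nozzle bore, m
PIN_TRAVEL_PER_STEP = 5.0e-6  # assumed pin advance per motor step, m
PIN_ANGLE = math.radians(30)  # assumed half-angle of the conical pin tip

def open_area(steps_retracted: int) -> float:
    """Annular open area left between the conical pin and the nozzle bore."""
    travel = steps_retracted * PIN_TRAVEL_PER_STEP
    pin_radius = max(NOZZLE_DIAM / 2 - travel * math.tan(PIN_ANGLE), 0.0)
    return math.pi * ((NOZZLE_DIAM / 2) ** 2 - pin_radius ** 2)

def flow_rate(steps_retracted: int, pressure_kpa: float) -> float:
    """Estimated flow rate (m^3/s) for a given pin position and service pressure."""
    dp = pressure_kpa * 1e3
    return CD * open_area(steps_retracted) * math.sqrt(2 * dp / RHO)

# Example: flow at 200 kPa for a few pin positions, in litres per minute.
for steps in (100, 400, 800):
    print(steps, round(flow_rate(steps, 200.0) * 60_000, 2), "L/min")
```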

Relevance: 30.00%

Publisher:

Abstract:

INTRODUCTION: Rapid maxillary expansion (RME) for the treatment of maxillary deficiency and posterior crossbite may induce changes in the vertical dimension. Expanders with occlusal splints have been developed to minimize unwanted vertical effects. OBJECTIVE: This preliminary study used cephalometric radiographs to evaluate the vertical effects of RME using a Hyrax appliance in children with maxillary deficiency. METHOD: Twenty-six patients (11 boys; mean age = 8 years and 5 months) with maxillary deficiency and posterior crossbite were treated using a Hyrax appliance with an acrylic occlusal splint. Radiographs and cephalometric studies were performed before the beginning of treatment (T1) and after the active RME phase (T2), at a mean interval of 7 months. Results were compared with normative values. RESULTS AND CONCLUSIONS: At the end of treatment, there were no statistically significant changes, and measurements were similar to the normative values. The data showed that there were no significant effects on vertical growth, which suggests that appliances with occlusal splints may be used to correct transverse deficiencies regardless of the patient's growth pattern.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents the new active absorption wave basin, named Hydrodynamic Calibrator (HC), constructed at the University of São Paulo (USP), in the laboratory facilities of the Numerical Offshore Tank (TPN). The square (14 m × 14 m) tank is able to generate and absorb waves from 0.5 Hz to 2.0 Hz by means of 148 active hinged-flap wave makers. An independent mechanical system drives each flap by means of a 1-hp servo motor and a ball-screw-based transmission system. A customized ultrasonic wave probe is installed on each flap and is responsible for measuring the wave elevation at the flap. A complex automation architecture was implemented, with three Programmable Logic Controllers (PLCs), and low-level software is responsible for all the interlocks and maintenance functions of the tank. Furthermore, all the control algorithms for wave generation and absorption are implemented in higher-level software (MATLAB/Simulink block diagrams). These algorithms calculate the motions of the wave makers both to generate and to absorb the required wave field, taking into account the layout of the flaps and the limits of wave generation. The experimental transfer function that relates the flap amplitude to the wave elevation amplitude is used to calculate the motion of each flap. This paper describes the main features of the tank, followed by a detailed presentation of the whole automation system. It includes the measuring devices, signal conditioning, PLC and network architecture, real-time and synchronizing software, and the motor control loop. Finally, a validation of the whole automation system is presented, by means of an experimental analysis of the transfer function of the generated waves and the calculation of all the delays introduced by the automation system.
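
The abstract states that the experimentally identified transfer function between flap amplitude and wave elevation amplitude is used to compute the motion of each flap. The sketch below shows the basic idea of inverting such a gain curve, with hypothetical tabulated gains and an assumed stroke limit; it is not the TPN control code, which is implemented in MATLAB/Simulink.

```python
# Hedged sketch of commanding a hinged-flap wave maker from a measured
# transfer function H(f) = wave amplitude / flap amplitude. Gains, target
# wave and stroke limit are illustrative assumptions.
import numpy as np

# Hypothetical measured transfer function, sampled at a few frequencies.
freq_hz = np.array([0.5, 0.8, 1.0, 1.3, 1.6, 2.0])
gain    = np.array([0.35, 0.60, 0.80, 1.05, 1.20, 1.25])  # m of wave per m of flap stroke

FLAP_STROKE_LIMIT = 0.25   # assumed mechanical limit of the flap motion, m

def flap_amplitude(target_wave_amp: float, f: float) -> float:
    """Flap amplitude needed for a target wave amplitude at frequency f (Hz)."""
    h = np.interp(f, freq_hz, gain)          # interpolate the measured gain
    demand = target_wave_amp / h
    return min(demand, FLAP_STROKE_LIMIT)    # respect the wave-generation limits

# Example: a 0.10 m wave at 1.2 Hz.
print(f"flap amplitude: {flap_amplitude(0.10, 1.2):.3f} m")
```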

Relevance: 30.00%

Publisher:

Abstract:

Computational fluid dynamics (CFD) is becoming an essential tool in the prediction of the hydrodynamic loads and flow characteristics of underwater vehicles for manoeuvring studies. However, when applied to the manoeuvrability of autonomous underwater vehicles (AUVs), most studies have focused on the determination of static coefficients without considering the effects of the deflection of the vehicle's control surfaces. This paper analyses the hydrodynamic loads generated on an AUV considering the combined effects of the control surface deflection and the angle of attack, using CFD software based on the Reynolds-averaged Navier–Stokes formulations. The CFD simulations are also conducted independently for the AUV bare hull and the control surface, to better identify their individual and interference contributions and to validate the simulations by comparison with experimental results obtained in a towing tank. Several simulations of the bare-hull case were conducted to select the k–ω SST turbulence model with the viscosity approach that best predicts its hydrodynamic loads. Mesh sensitivity analyses were conducted for all simulations. For the flow around the control surfaces, the CFD results were analysed according to two different methodologies, standard and nonlinear. The nonlinear regression methodology provides better results than the standard methodology for predicting stall at the control surface. The flow simulations show that the occurrence of control surface stall depends on a linear relationship between the angle of attack and the control surface deflection. This type of information can be used in designing the vehicle's autopilot system.
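
Since the simulations indicate a linear relationship between the angle of attack and the control surface deflection at stall onset, such a boundary can be fitted by least squares and used as a simple check in the autopilot. The sketch below does this with invented sample points; the coefficients are purely illustrative and are not the paper's results.

```python
# Hedged sketch: fitting a linear stall boundary delta_stall = a + b * alpha
# to control-surface CFD results. The sample points are invented for illustration.
import numpy as np

# Hypothetical (angle of attack, deflection at stall onset), in degrees.
alpha_deg = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
delta_stall_deg = np.array([24.0, 20.5, 16.8, 13.1, 9.4])

# Least-squares fit of the linear boundary.
b, a = np.polyfit(alpha_deg, delta_stall_deg, 1)
print(f"delta_stall ≈ {a:.1f} {b:+.2f} * alpha (degrees)")

def is_stalled(alpha: float, delta: float) -> bool:
    """True when the commanded deflection exceeds the fitted stall boundary."""
    return delta > a + b * alpha

# Example check an autopilot could run before commanding a deflection.
print(is_stalled(alpha=10.0, delta=18.0))
```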

Relevance: 30.00%

Publisher:

Abstract:

During maximal whole-body exercise, peak VO2 is limited by O2 delivery. In turn, it is thought that blood flow during near-maximal exercise must be restrained by the sympathetic nervous system to maintain mean arterial pressure. To determine whether enhancing vasodilation across the leg results in higher O2 delivery and leg VO2 during near-maximal and maximal exercise in humans, seven men performed two maximal incremental exercise tests on the cycle ergometer. In random order, one test was performed with and one without (control exercise) infusion of ATP (8 mg in 1 ml of isotonic saline solution) into the right femoral artery at a rate of 80 microg · kg body mass-1 · min-1. During near-maximal exercise (92% of VO2 peak), the infusion of ATP increased leg vascular conductance (+43%, P<0.05), leg blood flow (+20%, 1.7 l/min, P<0.05), and leg O2 delivery (+20%, 0.3 l/min, P<0.05). No effects were observed on leg or systemic VO2. Leg O2 fractional extraction was decreased from 85±3 (control) to 78±4% (ATP) in the infused leg (P<0.05), while it remained unchanged in the left leg (84±2 and 83±2%; control and ATP; n=3). ATP infusion at maximal exercise increased leg vascular conductance by 17% (P<0.05), while leg blood flow tended to be elevated by 0.8 l/min (P=0.08). However, neither systemic nor leg peak VO2 values were enhanced, owing to a reduction of O2 extraction from 84±4 to 76±4% in the control and ATP conditions, respectively (P<0.05). In summary, the VO2 of the skeletal muscles of the lower extremities is not enhanced by limb vasodilation at near-maximal or maximal exercise in humans. The fact that ATP infusion resulted in a reduction of O2 extraction across the exercising leg suggests a vasodilating effect of ATP on less-active muscle fibers and other noncontracting tissues, and that under normal conditions these regions are under high vasoconstrictor influence to ensure the most efficient distribution of the available cardiac output to the most active muscle fibers of the exercising limb.

Relevance: 30.00%

Publisher:

Abstract:

To determine central and peripheral hemodynamic responses to upright leg cycling exercise, nine physically active men underwent measurements of arterial blood pressure and gases, as well as femoral and subclavian vein blood flows and gases, during incremental exercise to exhaustion (Wmax). Cardiac output (CO) and leg blood flow (BF) increased in parallel with exercise intensity. In contrast, arm BF remained at 0.8 l/min during submaximal exercise, increasing to 1.2 ± 0.2 l/min at maximal exercise (P < 0.05), when arm O2 extraction reached 73 ± 3%. The leg received a greater percentage of the CO with increasing exercise intensity, reaching a value close to 70% at 64% of Wmax, which was maintained until exhaustion. The percentage of CO perfusing the trunk decreased with exercise intensity to 21% at Wmax, i.e., to approximately 5.5 l/min. For a given local VO2, leg vascular conductance (VC) was five- to sixfold higher than arm VC, despite marked hemoglobin deoxygenation in the subclavian vein. At peak exercise, arm VC was not significantly different from that at rest. Leg VO2 represented approximately 84% of the whole-body VO2 at intensities ranging from 38 to 100% of Wmax. Arm VO2 contributed between 7 and 10% to the whole-body VO2. From 20 to 100% of Wmax, the trunk VO2 (including the gluteus muscles) represented between 14 and 15% of the whole-body VO2. In summary, vasoconstrictor signals efficiently oppose the vasodilatory metabolites in the arms, suggesting that during whole-body exercise in the upright position blood flow is differentially regulated in the upper and lower extremities.

Relevance: 30.00%

Publisher:

Abstract:

A universal O2 sensor presumes that compensation for impaired O2 delivery is triggered by low O2 tension, but in humans, comparisons of compensatory responses to altered arterial O2 content (CaO2) or tension (PaO2) have not been reported. To directly compare cardiac output (QTOT) and leg blood flow (LBF) responses to a range of CaO2 and PaO2, seven healthy young men were studied during two-legged knee-extension exercise with control hemoglobin concentration ([Hb] = 144.4 ± 4 g/l) and, at least 1 wk later, after isovolemic hemodilution ([Hb] = 115 ± 2 g/l). On each study day, subjects exercised twice, at 30 W and then to voluntary exhaustion, with an FIO2 of 0.21 or 0.11. The interventions resulted in two conditions with matched CaO2 but markedly different PaO2 (hypoxia and anemia) and two conditions with matched PaO2 and different CaO2 (hypoxia and anemia + hypoxia). PaO2 varied from 46 ± 3 Torr in hypoxia to 95 ± 3 Torr (range 37 to >100) in anemia (P < 0.001), yet LBF during exercise was nearly identical. However, as CaO2 dropped from 190 ± 5 ml/l in control to 132 ± 2 ml/l in anemia + hypoxia (P < 0.001), QTOT and LBF at 30 W rose to 12.8 ± 0.8 and 7.2 ± 0.3 l/min, respectively, values 23 and 47% above control (P < 0.01). Thus the regulation of QTOT, LBF, and arterial O2 delivery to contracting intact human skeletal muscle depends for its signaling primarily on CaO2, not PaO2. This finding suggests that factors related to CaO2 or [Hb] may play an important role in the regulation of blood flow during exercise in humans.
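
The distinction between CaO2 and PaO2 that drives this study follows directly from the standard arterial O2 content equation, CaO2 = 1.34·[Hb]·SaO2 + 0.003·PaO2 (ml O2 per dl). The sketch below evaluates it with the haemoglobin concentrations quoted in the abstract; the saturation and PaO2 values are assumptions chosen only to illustrate how the four conditions separate, not the measured values.

```python
# Worked sketch of the standard arterial O2 content equation,
#   CaO2 = 1.34 * [Hb] * SaO2 + 0.003 * PaO2   (ml O2 per dl of blood),
# using the [Hb] values quoted in the abstract. SaO2 and PaO2 below are
# illustrative assumptions, not the study's measurements.
def cao2_ml_per_l(hb_g_per_l: float, sao2: float, pao2_torr: float) -> float:
    hb_g_per_dl = hb_g_per_l / 10.0
    ml_per_dl = 1.34 * hb_g_per_dl * sao2 + 0.003 * pao2_torr
    return ml_per_dl * 10.0   # convert ml/dl -> ml/l

conditions = {
    "control":         dict(hb_g_per_l=144, sao2=0.97, pao2_torr=95),
    "hypoxia":         dict(hb_g_per_l=144, sao2=0.80, pao2_torr=46),  # assumed SaO2
    "anemia":          dict(hb_g_per_l=115, sao2=0.97, pao2_torr=95),
    "anemia+hypoxia":  dict(hb_g_per_l=115, sao2=0.80, pao2_torr=46),
}
for name, c in conditions.items():
    print(f"{name:15s} CaO2 ≈ {cao2_ml_per_l(**c):.0f} ml/l")
```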

Relevance: 30.00%

Publisher:

Abstract:

Doctoral programme in Oceanography

Relevance: 30.00%

Publisher:

Abstract:

This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials allows the development of adaptive structures which actively respond to changes in the environment, such as morphing structures, self-healing structures and power-harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the composite's static and dynamic properties. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction to and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites. Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling, as they are responsible for some common failure mechanisms. A new finite element based on the First-order Shear Deformation Theory and a hybrid stress approach is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution. A two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve the convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented.
These experimental observations display some features of the shape memory alloy behaviour which are generally not included in current models; thus, some ideas are proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires. A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks related to complexity and versatility. A simple model able to describe both shape and stiffness control configurations within the same framework is proposed and implemented. The model is then validated considering the shape control configuration, which is the most sensitive to the model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build; however, it is useful for experimentally demonstrating the feasibility of the concept proposed in the first part of the chapter. In the second part, the manufacture of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be faced, the experimental results demonstrate the mechanism of shape control via embedded shape memory alloy wires, while showing good agreement with the predictions of the proposed model.
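
The equilibrium-based transverse stress recovery described for the second chapter amounts to integrating the three-dimensional equilibrium equations through the thickness, e.g. sigma_xz(z) = -∫(∂sigma_xx/∂x + ∂sigma_xy/∂y) dz starting from a traction-free face. The sketch below performs that integration numerically for an invented four-ply example; the in-plane stress gradients are placeholders, and the Recovery by Compatibility in Patches step is not reproduced.

```python
# Hedged sketch of the equilibrium-based step of transverse stress recovery:
# cumulative integration of -(d(sigma_xx)/dx + d(sigma_xy)/dy) through the
# thickness, starting from the traction-free bottom face. Layup and stress
# gradients are invented for illustration.
import numpy as np

h = 2.0e-3                              # total laminate thickness, m
z = np.linspace(-h / 2, h / 2, 81)      # through-thickness coordinate

# Hypothetical per-ply in-plane stress gradients (Pa/m), as they might come
# from a 2-D finite element solution at one in-plane point (4 equal plies).
ply = np.digitize(z, [-h / 4, 0.0, h / 4])
dsxx_dx = np.array([1.2e8, 0.4e8, -0.4e8, -1.2e8])[ply]
dsxy_dy = np.array([0.3e8, 0.1e8, -0.1e8, -0.3e8])[ply]

# Trapezoidal cumulative integration: sigma_xz(-h/2) = 0 at the free face.
integrand = -(dsxx_dx + dsxy_dy)
sigma_xz = np.concatenate(([0.0], np.cumsum(
    0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))))

print(f"max |sigma_xz| ≈ {abs(sigma_xz).max() / 1e6:.3f} MPa")
print(f"sigma_xz at top face ≈ {sigma_xz[-1] / 1e6:.4f} MPa (≈ 0 for an equilibrated field)")
```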

Relevance: 30.00%

Publisher:

Abstract:

The velocity and mixing fields of two turbulent jet configurations have been experimentally characterized by means of cold- and hot-wire anemometry in order to investigate the effects of the initial conditions on the flow development. In particular, the experiments focused on the effect of the separating wall between the two streams on the flow field. The results of the experiments have pointed out that the wake behind a thick separating wall has a strong influence on the evolution of the flow field. For instance, for nearly unitary velocity ratios, clear vortex shedding from the wall is observable. This phenomenon enhances the mixing between the inner and outer shear layers. This enhancement in the fluctuating activity is a consequence of a local absolute instability of the flow which, for a small range of velocity ratios, behaves as a hydrodynamic oscillator that is insensitive to external perturbations. It has indeed been suggested that this absolute instability can be used as a passive method to control the flow evolution. Finally, acoustic excitation has been applied to the near field in order to verify whether or not the observed vortex shedding behind the separating wall is due to a global oscillating mode, as predicted by the theory. A new scaling relationship has also been proposed to determine the preferred frequency for nearly unitary velocity ratios. The proposed law takes into account the dependence of this frequency on both the Reynolds number and the velocity ratio and, therefore, improves on previously proposed relationships.
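
The proposed scaling law for the preferred frequency is not reproduced here; as a point of comparison, the sketch below gives the classical bluff-body Strouhal estimate f = St·U/t for the wall wake, which the new law refines by adding the Reynolds-number and velocity-ratio dependence. Wall thickness, Strouhal number and velocities are assumed values.

```python
# Hedged sketch: classical Strouhal estimate of the shedding frequency behind
# the thick separating wall, f = St * U_mean / t. All numbers are illustrative
# assumptions; the thesis' refined Re- and velocity-ratio-dependent law is not
# reproduced here.
WALL_THICKNESS = 5.0e-3   # assumed separating-wall thickness, m
STROUHAL = 0.24           # assumed effective Strouhal number for a blunt trailing edge

def shedding_frequency(u_inner: float, u_outer: float) -> float:
    """Estimated vortex-shedding frequency (Hz) for the wall wake."""
    u_mean = 0.5 * (u_inner + u_outer)   # average of the two stream velocities
    return STROUHAL * u_mean / WALL_THICKNESS

# Example: nearly unitary velocity ratio (u_outer / u_inner ≈ 0.9).
print(f"f ≈ {shedding_frequency(10.0, 9.0):.0f} Hz")
```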

Relevance: 30.00%

Publisher:

Abstract:

Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend towards complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy boxed products such as food or cigarettes, and so on. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain the desired overall behaviour is a hard task. This is often the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and crucial functional flexibility; dynamically adapting the control strategies according to different production needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very “unstructured”, way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, along with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices, which are more vulnerable by their own nature, are present together with reliable mechanical elements. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and, where necessary, reconfiguring the control system so that faults are tolerated. On this topic, important improvements in the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the software engineering paradigm applied to industrial automation is presented. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader to understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
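
To give a flavour of the Discrete Event Systems view of online diagnosis used in Chapter 5, the sketch below implements a toy diagnoser for a generalized-actuator-like component: observable events (commands and feedback) drive a nominal automaton, and an event that the model does not allow, or feedback that fails to arrive within the expected number of control periods, raises a fault flag. The event names, states and one-period deadline are assumptions for illustration, not the thesis architecture.

```python
# Toy event-driven diagnoser for a "generalized actuator"-like component.
# Events, states and the one-period feedback deadline are illustrative assumptions.
from dataclasses import dataclass

# Nominal model: (state, observable event) -> next state.
TRANSITIONS = {
    ("retracted",  "cmd_extend"):   "extending",
    ("extending",  "fb_extended"):  "extended",
    ("extended",   "cmd_retract"):  "retracting",
    ("retracting", "fb_retracted"): "retracted",
}

@dataclass
class ActuatorDiagnoser:
    state: str = "retracted"
    pending: int = 0          # control periods left for the expected feedback
    faulty: bool = False

    def step(self, event: str) -> None:
        """Consume one observable event; events outside the nominal model are faults."""
        key = (self.state, event)
        if key in TRANSITIONS:
            self.state = TRANSITIONS[key]
            # after a command, allow one control period for the feedback (assumed deadline)
            self.pending = 1 if event.startswith("cmd_") else 0
        else:
            self.faulty = True

    def tick(self) -> None:
        """Call once per control period; missing end-of-stroke feedback is a fault."""
        if self.pending:
            self.pending -= 1
            if self.pending == 0 and self.state in ("extending", "retracting"):
                self.faulty = True

# Example: the extend command is issued but the end-of-stroke feedback never arrives.
d = ActuatorDiagnoser()
d.step("cmd_extend")
d.tick()
print(d.state, "fault detected:", d.faulty)
```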

Relevance: 30.00%

Publisher:

Abstract:

The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Due to both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and eventually correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few per cent with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction to aperture photometry to light-curve production and analysis.
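
The core of the aperture-photometry stage can be summarized as summing the counts inside a circular aperture and subtracting a sky level measured in a surrounding annulus. The sketch below does this on a synthetic frame; the star model, radii and noise are assumptions, not the actual SPSS pipeline settings.

```python
# Hedged sketch of aperture photometry on a synthetic frame: aperture sum
# minus an annulus-based sky estimate. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 64x64 frame: flat sky + Gaussian star + Poisson noise.
ny = nx = 64
yy, xx = np.mgrid[:ny, :nx]
x0, y0, flux, fwhm, sky = 32.0, 32.0, 5.0e4, 3.0, 100.0
sigma = fwhm / 2.3548
star = (flux / (2 * np.pi * sigma ** 2)
        * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2)))
image = rng.poisson(star + sky).astype(float)

def aperture_photometry(img, xc, yc, r_ap=6.0, r_in=10.0, r_out=15.0):
    """Background-subtracted counts in a circular aperture of radius r_ap."""
    gy, gx = np.mgrid[:img.shape[0], :img.shape[1]]
    r = np.hypot(gx - xc, gy - yc)
    sky_level = np.median(img[(r >= r_in) & (r < r_out)])   # robust sky estimate
    in_ap = r <= r_ap
    return img[in_ap].sum() - sky_level * in_ap.sum()

counts = aperture_photometry(image, x0, y0)
print(f"measured: {counts:.0f} counts, injected: {flux:.0f} counts")
```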