904 results for Parallel and Distributed Processing
Abstract:
Microencapsulation can be an alternative to minimize lycopene instability. Thus, the aim of this study was to microencapsulate lycopene by spray drying, using a modified starch (Capsul®) as the encapsulating agent, and to assess the functionality of the capsules by applying them in cakes. To obtain the microcapsules, the lycopene content was varied at 5, 10 and 15% in a solution containing 30% solids. The microcapsules were evaluated for encapsulation efficiency and morphology, then submitted to a stability test and applied in cakes. Encapsulation efficiency values varied between 21 and 29%. The microcapsules had a rounded outer surface with concavities and varied in size. The stability test revealed that microencapsulation offered greater protection to lycopene than its free form, and the microcapsules were able to release the pigment and color the studied food system in a homogeneous manner.
Abstract:
There has been much discussion on the importance of Brazilian ethanol in promoting a more sustainable society. However, there is a lack of analysis of whether sugarcane plants/factories that produce this ethanol are environmentally suitable. Thus, the objective of this study was to analyse stages of environmental management at four Brazilian ethanol-producing plants, examining the management practices adopted and the factors behind this adoption. The results indicate that (1) only one of the four plants is in the environmentally proactive stage; (2) all plants are adopting operational and organisational environmental management practices; (3) all plants have problems in communicating environmental management practices; and (4) the plant with the most advanced environmental management makes intense use of communication practices and is strongly oriented towards a more environmentally aware international market. This paper is an attempt to explain the complex relationship between the evolution of environmental management, environmental practices and motivation using a framework. The implications for society, plant directors and scholars are described, as well as the study's limitations.
Abstract:
Objective: To evaluate the degree of perception of laypersons, dental professionals, and dental students regarding dental esthetics in cases of mandibular central incisor extraction. Materials and Methods: Starting from a smile photograph of a person with normal occlusion and all teeth present, modifications were made to simulate the extraction of a mandibular incisor, with the remaining incisors in various compositions and sizes. For this purpose, image-manipulation software (Adobe Photoshop CS3, Adobe Systems Inc) was used. After manipulation, the images were printed on photographic paper, attached to a questionnaire, and distributed to laypersons, dental professionals, and dental students (n = 90), who evaluated the degree of perception and esthetics on a scale of attractiveness where 0 = hardly attractive, 5 = attractive, and 10 = very attractive. Differences between examiners were checked with the Mann-Whitney test. All statistics were performed at a confidence level of 95%. Results: The results demonstrated the ability of the dental professionals and dental students to perceive the difference between cases of normal occlusion and cases lacking an incisor (P < .05). The photograph in which the lateral incisors appeared larger than the central incisor obtained the highest value among the extraction cases in all groups of evaluators. Conclusions: Dental professionals and dental students are more skillful at identifying deviation from normality. In addition, central incisor extraction should always be ruled out when other treatment options are available. (Angle Orthod. 2012;82:732-738.)
Abstract:
This paper presents a technique for performing analog design synthesis at the circuit level, providing feedback to the designer through exploration of the Pareto frontier. A modified simulated annealing algorithm, able to perform crossover with past anchor points when a local minimum is found, is used as the optimization algorithm in the initial synthesis procedure. After all specifications are met, the algorithm searches for the extreme points of the Pareto frontier in order to obtain a non-exhaustive exploration of the Pareto front. Finally, multi-objective particle swarm optimization is used to spread the results and to find a more accurate frontier. Piecewise-linear functions are used as single-objective cost functions when composing the aggregate objective function, to produce a smooth and uniform convergence of all measurements toward the desired specifications. To verify the presented technique, two circuits were designed: a Miller amplifier with 96 dB voltage gain, 15.48 MHz unity-gain frequency and a slew rate of 19.2 V/µs with a supply current of 385.15 µA, and a complementary folded cascode with 104.25 dB voltage gain, 18.15 MHz unity-gain frequency and a slew rate of 13.370 MV/µs. These circuits were synthesized using a 0.35 µm technology. The results show that the method quickly reaches good solutions using the modified SA, and then achieves good Pareto-front exploration through its connection to the particle swarm optimization algorithm.
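As an illustration of the aggregate-objective idea in this abstract, here is a minimal Python sketch (not the authors' code): each specification gets a piecewise-linear cost that is positive while the spec is violated and zero once it is met, the aggregate objective is their sum, and a plain simulated-annealing loop minimizes it. The `simulate` function and its two design variables are purely hypothetical stand-ins for a circuit simulator, and the paper's SA variant additionally performs crossover with past anchor points when a local minimum is detected.

```python
import math
import random

def pw_linear_cost(value, spec, maximize=True):
    """Piecewise-linear penalty: zero once the spec is met, linear while violated."""
    gap = (spec - value) if maximize else (value - spec)
    return max(0.0, gap / abs(spec))  # normalized so different specs converge at comparable rates

def aggregate_cost(meas, specs):
    """Aggregate objective: sum of the single-objective piecewise-linear costs."""
    return sum(pw_linear_cost(meas[k], s, mx) for k, (s, mx) in specs.items())

def simulate(x):
    """Hypothetical 'circuit': maps two design variables to two measurements."""
    w, ib = x
    return {"gain_db": 40.0 * math.log10(1.0 + w * ib),
            "ugf_mhz": 0.5 * w * ib}

specs = {"gain_db": (96.0, True),   # (target, maximize?)
         "ugf_mhz": (15.0, True)}

def anneal(x0, steps=20000, t0=1.0):
    """Plain SA with linear cooling; accepts uphill moves with Boltzmann probability."""
    x = list(x0)
    cost = aggregate_cost(simulate(x), specs)
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9
        cand = [max(1e-3, xi * (1.0 + random.gauss(0.0, 0.1))) for xi in x]
        c = aggregate_cost(simulate(cand), specs)
        if c < cost or random.random() < math.exp((cost - c) / t):
            x, cost = cand, c
    return x, cost

print(anneal([1.0, 10.0]))  # aggregate cost reaches 0 once both specs are met
```

In the full flow described above, points found this way would then seed the search for Pareto-extreme points and the multi-objective particle swarm refinement.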
Abstract:
The aim of this study was to evaluate the correlation between the morphology of the mandibular dental arch and the maxillary central incisor crown. Cast models from 51 Caucasian individuals older than 15 years, with optimal occlusion, no previous orthodontic treatment, and featuring 4 of the 6 keys to normal occlusion by Andrews (the first being mandatory), were examined. The models were digitized using a 3D scanner, and images of the maxillary central incisor and mandibular dental arch were obtained. These were printed, placed in an album below pre-set models of arches and dental crowns, and distributed to 12 dental surgeons, who were asked to choose which shape best matched the models and crown presented. The Kappa test was used to evaluate concordance among evaluators, while the chi-square test was used to verify the association between dental arch and central incisor morphology, at a 5% significance level. The Kappa test showed moderate agreement among evaluators for both variables, and the chi-square test showed no significant association between tooth shape and mandibular dental arch morphology. It may be concluded that using arch morphology as a diagnostic method to determine the shape of the maxillary central incisor is not appropriate. Further research is necessary to assess tooth shape on a stricter scientific basis.
Abstract:
Twelve ileal cannulated pigs (30.9 ± 2.7 kg) were used to determine the apparent (AID) and standardized (SID) ileal digestibility of protein and AA in canola meals (CM) derived from black- (BNB) and yellow-seeded (BNY) Brassica napus canola and yellow-seeded Brassica juncea (BJY). The meals were produced using either the conventional pre-press solvent extraction process (regular meal) or a new, vacuum-assisted cold process of meal de-solventization (white flakes) to provide 6 different meals. Six cornstarch-based diets containing 35% canola meal as the sole source of protein in a 3 (variety) × 2 (processing) factorial arrangement were randomly allotted to pigs in a 6 × 7 incomplete Latin square design to have 6 replicates per diet. A 5% casein diet was fed to estimate endogenous AA losses. Canola variety and processing method interacted for the AID of DM (P = 0.048), N (P = 0.010), and all AA (P < 0.05), except for Arg, Lys, Phe, Asp, Glu, and Pro. Canola variety affected or tended to affect the AID of most AA but had no effect on the AID of Lys, Met, Val, Cys, and Pro, whereas processing method had an effect on only Lys and Asp and tended to affect the AID of Thr, Gly and Ser. The effects of canola variety, processing method, and their interaction on the SID values for N and AA followed a similar pattern as for AID values. For the white flakes, SID of N in BJY (74.2%) was lower than in BNY and BNB, whose values averaged 78.5%; however, among the regular meals, BJY had a greater SID value for N than BNY and BNB (variety × processing, P = 0.015). For the white flakes, the SID of Ile (86.4%), Leu (87.6%), Lys (88.9%), Thr (87.6%) and Val (84.2%) in BNB were greater than in BNY and BJY. Opposite results were observed for the regular processing, with SID of Lys (84.1%), Met (89.5%), Thr (84.1%), and Val (83.6%) being greater in BJY, followed by BNB and BNY (variety × processing, P < 0.057). The SID of Met was greatest for the white flakes (90.2%) but least for the regular processing (83.0%) in BNY (variety × processing, P < 0.057). It was concluded that the AID and SID of N and AA of the CM tested varied according to canola variety and the processing method used. Overall, the SID values for Ile, Leu, Lys, Met, Thr, and Val averaged across CM types and processing methods were 81.8, 82.6, 83.4, 85.9, 80.8, and 78.4%, respectively.
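For readers unfamiliar with the two digestibility measures, the standardized value is typically obtained by correcting the apparent value for basal ileal endogenous losses, here estimated with the 5% casein diet. The exact correction used in this study is not given in the abstract; the commonly used form is:

```latex
% Common AID -> SID correction for basal endogenous losses (assumed form,
% not quoted from the paper):
\mathrm{SID}\,(\%) \;=\; \mathrm{AID}\,(\%) \;+\;
\frac{\mathrm{IAA}_{\mathrm{end}}\ (\text{g/kg DMI})}
     {\mathrm{AA}_{\mathrm{diet}}\ (\text{g/kg DMI})} \times 100
```

where IAA_end is the basal ileal endogenous loss of the amino acid and AA_diet its dietary content, both expressed per kg of dry matter intake; this is why SID values sit above the corresponding AID values.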
Abstract:
DNA damage induced by ultraviolet (UV) radiation can be removed by nucleotide excision repair through two sub-pathways, one general (GGR) and the other specific for transcribed DNA (TCR), and the processing of unrepaired lesions triggers signals that may lead to cell death. These signals involve the tumor suppressor protein p53, a central regulator of cell responses to DNA damage, and the E3 ubiquitin ligase Mdm2, which forms a feedback regulatory loop with p53. The involvement of cell cycle and transcription in the signaling to apoptosis was investigated in UVB-irradiated synchronized primary human fibroblasts: DNA repair-proficient (normal), CS-B (TCR-deficient) and XP-C (GGR-deficient) cells. Cells were irradiated in the G1 phase of the cell cycle with two doses (low and high) producing equivalent levels of apoptosis, defined for each cell line. In the three cell lines, the low doses of UVB caused only a transient delay in progression to the S phase, whereas the high doses induced permanent cell cycle arrest. However, while accumulation of Mdm2 correlated well with recovery from transcription inhibition at the low doses in normal and CS-B fibroblasts, in XP-C cells this protein accumulated even at UVB doses that induced high levels of apoptosis. Thus, UVB-induced accumulation of Mdm2 is critical for counteracting p53 activation and avoiding apoptosis, but its effect is limited by transcription inhibition. In XP-C cells, however, an excess of unrepaired DNA damage would be sufficient to block S-phase progression, which would signal to apoptosis independently of Mdm2 accumulation. The data clearly discriminate the DNA damage signals that lead to cell death, depending on whether UVB-induced DNA damage lies in replicating or transcribed regions.
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external stimuli. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processing in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called “assembly coding” theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition, based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of the neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models, which can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B.
Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
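Since both parts of the thesis build on coupled excitatory/inhibitory units, a minimal sketch of a single Wilson-Cowan oscillator may help fix ideas. This is the textbook formulation with the classic Wilson & Cowan (1972) parameter set, not the thesis' actual networks or parameters; the oscillation period is a fixed multiple of the time constant tau, so an appropriate choice of tau places the rhythm in the γ-band exploited in Part 1.

```python
import math

def S(x, a, theta):
    """Sigmoidal activation used in the Wilson-Cowan equations."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def wilson_cowan(P=1.25, tau=0.003, dt=1e-4, T=0.5):
    """Euler integration of one excitatory (E) / inhibitory (I) unit.
    Classic coupling constants from Wilson & Cowan (1972); P is the external
    drive to E. The period scales with tau, so tau sets the frequency band."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    E, I, trace = 0.1, 0.05, []
    for _ in range(int(T / dt)):
        dE = (-E + (1.0 - E) * S(c1 * E - c2 * I + P, 1.3, 4.0)) / tau
        dI = (-I + (1.0 - I) * S(c3 * E - c4 * I, 2.0, 3.7)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return trace

trace = wilson_cowan()
print(f"E oscillates between {min(trace):.3f} and {max(trace):.3f}")
```

In network models like those of Chapters 1.1-1.2, many such units are coupled, and binding corresponds to subsets of units locking to a common phase while different objects occupy different phases of the oscillatory cycle.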
Abstract:
Selective oxidation is one of the simplest functionalization methods, and essentially all monomers used in manufacturing artificial fibers and plastics are obtained by catalytic oxidation processes. Formally, oxidation is considered an increase in the oxidation number of the carbon atoms, so reactions such as dehydrogenation, ammoxidation, cyclization or chlorination are all oxidation reactions. In this field, most processes for the synthesis of important chemicals use vanadium oxide-based catalysts. These catalytic systems are used either in the form of multicomponent mixed oxides and oxysalts, e.g. in the oxidation of n-butane (V/P/O) and of benzene (supported V/Mo/O) to maleic anhydride, or in the form of supported metal oxides, e.g. in the manufacture of phthalic anhydride by o-xylene oxidation, of sulphuric acid by oxidation of SO2, in the reduction of NOx with ammonia, and in the ammoxidation of alkyl aromatics. In addition, supported vanadia catalysts have also been investigated for the oxidative dehydrogenation of alkanes to olefins, the oxidation of pentane to maleic anhydride, and the selective oxidation of methanol to formaldehyde or methyl formate [1]. During my PhD I focused on two gas-phase selective oxidation reactions. The work was done at the Department of Industrial Chemistry and Materials (University of Bologna) in collaboration with Polynt SpA, a leading company in the development, production and marketing of catalysts for gas-phase oxidation. In particular, I studied the catalytic systems for n-butane oxidation to maleic anhydride (fluid-bed technology) and for o-xylene oxidation to phthalic anhydride. Both reactions are catalyzed by vanadium-based systems, but the catalysts are completely different. Part A is dedicated to the study of the V/P/O catalyst for n-butane selective oxidation, while Part B reports the results of an investigation on TiO2-supported V2O5, the catalyst for o-xylene oxidation. In Part A, a general introduction covers the importance of maleic anhydride, its uses, the industrial processes and the catalytic system. The reaction is the only industrial direct oxidation of a paraffin to a chemical intermediate. Maleic anhydride is produced by n-butane oxidation using either fixed-bed or fluid-bed technology; in both cases the catalyst is vanadyl pyrophosphate (VPP). Notwithstanding the good performance, the yield does not exceed 60%, and the system is continuously studied to improve activity and selectivity. The main open problem is understanding the real active phase working under reaction conditions. Several articles deal with the role of different crystalline and/or amorphous vanadium/phosphorus (VPO) compounds. In all cases, bulk VPP is assumed to constitute the core of the active phase, while two different hypotheses have been formulated concerning the catalytic surface. In one case, the development of surface amorphous layers that play a direct role in the reaction is described; in the other, specific planes of crystalline VPP are assumed to contribute to the reaction pattern, with the redox process occurring reversibly between VPP and VOPO4. Both hypotheses are supported by in-situ characterization techniques, but the experiments were performed with different catalysts and probably under slightly different working conditions. Given the complexity of the system, these differences could be the cause of the contradictions present in the literature.
Supposing that a key role could be played by the P/V ratio, I prepared, characterized and tested two samples with different P/V ratios. The transformations occurring on the catalytic surface under different conditions of temperature and gas-phase composition were studied by means of in-situ Raman spectroscopy, in order to investigate the changes that VPP undergoes during reaction. The goal is to understand which kind of compound on the catalyst surface is the most active and selective for the butane oxidation reaction, and also which features the catalyst should possess to ensure the development of this surface (e.g. catalyst composition). On the basis of these results, it could be possible to design a new catalyst more active and selective than the present ones. Indeed, the second topic investigated is the possibility of reproducing the surface active layer of VPP on a support. In general, supporting the active phase is a way to improve the mechanical features of a catalyst and to overcome problems such as the possible development of local hot-spot temperatures, which could decrease selectivity at high conversion, and the high cost of the catalyst. The literature includes several works on the development of supported catalysts, but in general the intrinsic characteristics of VPP are worsened by the chemical interaction between active phase and support. Moreover, all these works deal with supporting preformed VPP; in contrast, my work is an attempt to build up a V/P/O active layer on the surface of a zirconia support by thermal treatment of a precursor obtained by impregnation with a V5+ salt and H3PO4. In-situ Raman analysis during the thermal treatment, as well as reactivity tests, were used to investigate the parameters that may influence the generation of the active phase. Part B is devoted to the study of o-xylene oxidation to phthalic anhydride; industrially, the reaction is carried out in the gas phase using a supported catalyst formed by V2O5 on TiO2. The V/Ti/O system is quite complex; different vanadium species can be present on the titania surface, as a function of the vanadium content and of the titania surface area: (i) a V species chemically bound to the support via oxo bridges (isolated V in octahedral or tetrahedral coordination, depending on the hydration degree), (ii) a polymeric species spread over the titania, and (iii) bulk vanadium oxide, either amorphous or crystalline. The different species could have different catalytic properties; therefore, changing the relative amount of V species can be a way to optimize the catalytic performance of the system. For this reason, samples containing increasing amounts of vanadium were prepared and tested in the oxidation of o-xylene, with the aim of finding a correlation between V/Ti/O catalytic activity and the amount of the different vanadium species. The second part deals with the role of a gas-phase promoter. The catalytic surface can change under working conditions; the high temperatures and a different gas-phase composition could also affect the formation of the different V species. Furthermore, in industrial practice, vanadium oxide-based catalysts need the addition of gas-phase promoters to the feed stream which, although they play no direct role in the reaction stoichiometry, lead to considerable improvements in catalytic performance.
The starting point of my investigation is the possibility that steam, a component always present in oxidation reaction environments, could change the nature of the catalytic surface under reaction conditions. For this reason, the dynamic phenomena occurring at the surface of a 7 wt% V2O5-on-TiO2 catalyst in the presence of steam were investigated by means of Raman spectroscopy. Moreover, a correlation between the amount of the different vanadium species and the catalytic performance was sought. Finally, the role of dopants was studied. The industrial V/Ti/O system contains several dopants; the nature and the relative amount of the promoters may vary depending on the catalyst supplier and on the technology employed for the process, either a single-bed or a multi-layer catalytic fixed bed. Promoters have a quite remarkable effect on both activity and selectivity to phthalic anhydride. Their role is crucial, and proper control of the relative amount of each component is fundamental for process performance. Furthermore, it cannot be excluded that the same promoter may play different roles depending on the reaction conditions (temperature, gas-phase composition, etc.). The reaction network of phthalic anhydride formation is very complex and includes several parallel and consecutive reactions; for this reason, a proper understanding of the role of each dopant cannot be separated from the analysis of the reaction scheme. One of the most important promoters at the industrial level, always present in catalytic formulations, is Cs. It is known that Cs plays an important role in the selectivity to phthalic anhydride, but the reasons for this phenomenon are not really clear. Therefore, the effect of Cs on the reaction scheme was investigated at two different temperatures, with the aim of identifying in which step of the reaction network this promoter plays its role.
Abstract:
The main reasons for the attention focused on ceramics as possible structural materials are their wear resistance and their ability to operate with limited oxidation and ablation at temperatures above 2000°C. Hence, this work is devoted to the study of two classes of materials which can satisfy these requirements: silicon carbide (SiC)-based ceramics for wear applications, and borides and carbides of transition metals for ultra-high-temperature applications (UHTCs). SiC-based materials: Silicon carbide is a hard ceramic which finds applications in many industrial sectors, from heat production to automotive engineering and metals processing. In view of new fields of use, SiC-based ceramics were produced with the addition of 10-30 vol% of MoSi2, in order to obtain electroconductive ceramics. MoSi2 is an intermetallic compound which possesses high-temperature oxidation resistance, high electrical conductivity (resistivity 21·10⁻⁶ Ω·cm), relatively low density (6.31 g/cm³), a high melting point (2030°C) and high stiffness (440 GPa). The SiC-based ceramics were hot pressed at 1900°C with Al2O3-Y2O3 or Y2O3-AlN as sintering additives. The microstructure of the composites and of the reference materials, SiC and MoSi2, was studied by means of conventional analytical techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive spectroscopy (SEM-EDS). The composites showed a homogeneous microstructure, with good dispersion of the secondary phases and low residual porosity. The following thermo-mechanical properties of the SiC-based materials were measured: Vickers hardness (HV), Young's modulus (E), fracture toughness (KIc) and room- to high-temperature flexural strength (σ). Compared to the two monolithic SiC and MoSi2 materials, the composites showed higher stiffness, higher fracture toughness and slightly higher flexural strength. Tribological tests were also performed in two configurations, disc-on-pin and slider-on-cylinder, to study the wear behaviour of SiC-MoSi2 composites with Al2O3 as the counterface material. The tests pointed out that the addition of MoSi2 was detrimental, owing to a lower hardness in comparison with the pure SiC matrix. On the other hand, electrical measurements revealed that the addition of 30 vol% of MoSi2 rendered the composite electroconductive, lowering the electrical resistance by three orders of magnitude. Ultra-High-Temperature Ceramics: Carbides, borides and nitrides of transition metals (Ti, Zr, Hf, Ta, Nb, Mo) possess very high melting points and interesting engineering properties, such as high hardness (20-25 GPa), high stiffness (400-500 GPa), flexural strengths which remain unaltered from room temperature to 1500°C, and excellent corrosion resistance in aggressive environments. All these properties make UHTCs potential candidates for the development of manoeuvrable hypersonic flight vehicles with sharp leading edges. To this end, Zr- and Hf-carbide and boride materials were produced with the addition of 5-20 vol% of MoSi2. This secondary phase enabled the achievement of fully dense composites at temperatures lower than 2000°C and without the application of pressure. Besides the conventional microstructure analyses (XRD and SEM-EDS), transmission electron microscopy (TEM) was employed to explore the microstructure on a small length scale and disclose the effective densification mechanisms.
A thorough literature analysis revealed that neither detailed TEM work nor reports on densification mechanisms are available for this class of materials, although they are essential to optimize the sintering aids utilized and the processing parameters applied. Microstructural analyses, along with thermodynamic and crystallographic considerations, led to the disclosure of the effective role of MoSi2 during sintering of Zr- and Hf-carbides and borides. Among the investigated mechanical properties (HV, E, KIc, σ from room temperature to 1500°C), the high-temperature flexural strength was improved thanks to the protective and sealing effect of a silica-based glassy phase, especially for the borides. Nanoindentation tests were also performed on HfC-MoSi2 composites in order to extract the hardness and elastic modulus of the single phases. Finally, arc-jet tests on HfC- and HfB2-based composites confirmed the excellent oxidation behaviour of these materials at temperatures exceeding 2000°C; no cracking or spallation occurred, and the modified layer was only 80-90 μm thick.
Abstract:
Current trends in software development are pushing the need to face a multiplicity of diverse activities and interaction styles characterizing complex and distributed application domains, in such a way that the resulting dynamics exhibit some degree of order, i.e. in terms of evolution of the system and desired equilibrium. Autonomous agents and multiagent systems are argued in the literature to be one of the most immediate approaches for facing such challenges. Indeed, agent research seems to be converging towards the definition of renewed abstraction tools aimed at better capturing the new demands of open systems. Besides agents, which are assumed to be autonomous entities pursuing a series of design objectives, multiagent systems account for new notions as first-class entities, aimed above all at modeling institutional/organizational entities, introduced for normative regulation, interaction and teamwork management, as well as environmental entities, introduced as resources that further support and regulate agent work. The starting point of this thesis is the recognition that both organizations and environments can be rooted in a unifying perspective. Whereas recent research in agent systems accounts for a set of diverse approaches, each specifically facing at most one of the aspects mentioned above, this work proposes a unifying approach in which both agents and their organizations can be straightforwardly situated in properly designed working environments. Along this line, the work pursues the reconciliation of environments with sociality, of social interaction with environment-based interaction, and of environmental resources with organizational functionalities, with the aim of smoothly integrating the various aspects of complex and situated organizations in a coherent programming approach. Rooted in the Agents and Artifacts (A&A) meta-model, recently introduced in the context of both agent-oriented software engineering and programming, the thesis promotes the notion of Embodied Organizations, characterized by computational infrastructures attaining a seamless integration between agents, organizations and environmental entities.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems whose very low power consumption and small size reduce the interference with the physical phenomena being sensed and allow easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging can and does play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application-dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and to extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data-processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely a sophisticated classification algorithm and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip, while the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed-object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible, so building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric InfraRed (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power-management policies are essential for achieving the long-term operation needed by standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power-management policies.
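To make the PIR-triggered duty cycling concrete, here is a toy Python sketch of the energy-level-dependent trigger idea described above. All numbers and names are hypothetical illustrations, not the thesis' actual policy or its MPC controller: the camera and radio stay asleep until the PIR fires, and a trigger is serviced only if the battery level clears a threshold that rises as the energy budget shrinks.

```python
import random

def energy_threshold(battery_frac):
    """More residual energy -> lower bar for waking the camera (toy policy)."""
    return 0.2 + 0.6 * (1.0 - battery_frac)

def run_node(hours=24):
    battery = 1.0
    for h in range(hours):
        battery = min(1.0, battery + 0.02)    # solar harvesting (toy model)
        pir_fired = random.random() < 0.3     # motion events (toy model)
        if pir_fired and battery >= energy_threshold(battery):
            battery -= 0.05                   # camera capture + radio burst
            print(f"h={h:02d} wake camera, battery={battery:.2f}")
        else:
            battery -= 0.001                  # PIR-only sleep current
    return battery

run_node()
```

An MPC-based policy, as mentioned above, would replace this static threshold with one chosen by optimizing predicted energy intake and demand over a receding horizon.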
Abstract:
Over time, networks of permanent GNSS (Global Navigation Satellite System) stations have increasingly become a valuable support for satellite surveying techniques. They are at once an effective materialization of the reference system and a useful aid for topographic surveying and deformation-monitoring applications. Alongside the now-classic static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases the determination of precise coordinates for the permanent stations is very important, to the point that it was decided to carry it out with different processing environments. Bernese, Gamit (which share the differenced approach) and Gipsy (which uses the undifferenced approach) were compared. The use of three software packages made it indispensable to identify a common processing strategy able to guarantee that the ancillary data and the physical parameters adopted would not be a source of divergence between the solutions obtained. The analysis of national-scale networks, or of local networks over long time spans, involves processing thousands if not tens of thousands of files; moreover, sometimes because of trivial errors, or in order to run scientific tests, it is often necessary to repeat the processing. Many resources were therefore invested in setting up automated procedures aimed, on the one hand, at preparing the data archives and, on the other, at analysing the results and comparing them whenever multiple solutions are available. These procedures were developed by processing the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the positions of the permanent stations of some important local and national networks and to compare some of the most important scientific codes that perform this task. Regarding the comparison between the different software packages, it was found that:
• the solutions obtained with Bernese and Gamit (the two differenced packages) are always in perfect agreement;
• the Gipsy solutions (undifferenced method) are almost always slightly more scattered than those of the other packages and sometimes show appreciable numerical differences from the other solutions, especially in the East coordinate; the differences, however, are contained within a few millimetres, and the lines describing the trends are in any case practically parallel to those of the other two codes;
• the aforementioned East bias between Gipsy and the differenced solutions is more evident for certain antenna/radome combinations and seems to be linked to the way the different packages use the absolute calibrations.
It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure each station's daily file is processed independently of the others, with clearly greater flexibility of management: if an instrumental error is found at a single station, or if one decides to add or remove a station from the network, it is not necessary to reprocess the entire network.
Together with the other networks, it was possible to analyse the Rete Dinamica Nazionale (RDN): not only the 28 days that led to its first definition, but also four further 28-day intervals, spaced six months apart and therefore covering an overall time span of two years. It was thus verified that, despite the still-limited time span, the RDN can be used to frame any Italian regional network in ITRF05 (International Terrestrial Reference Frame). On the one hand, the (purely indicative, unofficial) ITRF velocities of the RDN stations were estimated; on the other, a test was carried out in which a regional network was framed in the ITRF through the RDN, and it was verified that there are no appreciable differences with respect to framing it in the ITRF through a suitable number of IGS/EUREF stations (International GNSS Service / European REference Frame, Sub-Commission for Europe of the International Association of Geodesy).
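For context on the differenced versus undifferenced distinction above: Bernese and Gamit process between-receiver/between-satellite differences of the carrier-phase observable, while Gipsy processes each station's undifferenced observations together with precise orbit and clock products. A standard form of the double-difference observable (textbook GNSS, not taken from the thesis) is:

```latex
% Double difference between receivers A, B and satellites i, j:
\nabla\Delta\varphi_{AB}^{ij} \;=\;
\left(\varphi_A^{i}-\varphi_B^{i}\right)-\left(\varphi_A^{j}-\varphi_B^{j}\right)
```

which cancels receiver and satellite clock errors at the cost of correlating stations with one another; the undifferenced approach instead keeps the clocks as estimated or externally supplied parameters, which is why each station-day file can be processed independently, as noted above.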
Abstract:
The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems through three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, a reliable set of features cannot be detected. A new fingerprint enhancement method, both iterative and contextual, is proposed. This approach detects high-quality regions in the fingerprint, selectively applies contextual filtering, and iteratively expands like wildfire toward low-quality regions. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, by pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, so an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is shown by the relevant statistics: more than 1450 algorithms submitted and two international competitions.
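As a schematic illustration of the iterative, contextual "wildfire" expansion described above (my own sketch, not the thesis implementation), the enhancement can be viewed as best-first region growing over image blocks: high-quality blocks are filtered first, and the frontier always advances through the most reliable unprocessed block, so the context (e.g. local orientation and frequency) for each low-quality block comes from already-enhanced neighbours.

```python
import heapq

def enhancement_order(quality, seed_threshold=0.7):
    """Best-first expansion over image blocks.
    quality: 2D list of per-block quality scores in [0, 1].
    Returns the order in which blocks would be contextually filtered."""
    h, w = len(quality), len(quality[0])
    # Seed the frontier with all high-quality blocks.
    heap = [(-quality[r][c], r, c)
            for r in range(h) for c in range(w)
            if quality[r][c] >= seed_threshold]
    heapq.heapify(heap)
    seen = {(r, c) for _, r, c in heap}
    order = []
    while heap:
        _, r, c = heapq.heappop(heap)   # most reliable frontier block first
        order.append((r, c))            # here: apply contextual filtering to (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen:
                seen.add((nr, nc))
                heapq.heappush(heap, (-quality[nr][nc], nr, nc))
    return order

quality = [[0.9, 0.8, 0.3],
           [0.6, 0.5, 0.2],
           [0.4, 0.3, 0.1]]
print(enhancement_order(quality))  # high-quality blocks first, noisiest last
```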