926 results for On Board Mass
Abstract:
Second generation antipsychotics (SGAs) have been linked to metabolic and bone disorders in clinical studies, but the mechanisms of these side effects remain unclear. Additionally, no studies have examined whether SGAs cause bone loss in mice. Using in vivo and in vitro modeling, we examined the effects of risperidone, the most commonly prescribed SGA, on bone in C57BL6/J (B6) mice. Mice were treated with risperidone orally by food supplementation at a dose of 1.25 mg/kg daily for 5 and 8 weeks, starting at 3.5 weeks of age. Risperidone reduced trabecular BV/TV, trabecular number and percent cortical area. Trabecular histomorphometry demonstrated increased resorption parameters, with no change in osteoblast number or function. Risperidone also altered adipose tissue distribution such that white adipose tissue mass was reduced and the liver showed significantly higher lipid infiltration. Next, in order to tightly control risperidone exposure, we administered risperidone by chronic subcutaneous infusion with osmotic minipumps (0.5 mg/kg daily for 4 weeks) in 7-week-old female B6 mice. Similar trabecular and cortical bone differences were observed compared to the orally treated groups (reduced trabecular BV/TV and connectivity density, and reduced percent cortical area), with no change in body mass, percent body fat, glucose tolerance or insulin sensitivity. Unlike in orally treated mice, risperidone infusion reduced bone formation parameters (serum P1NP, MAR and BFR/BV). Resorption parameters were elevated, but this increase did not reach statistical significance. To determine whether risperidone could directly affect bone cells, primary bone marrow cells were cultured with osteoclast or osteoblast differentiation media. Risperidone was added to the culture medium at clinically relevant doses of 0, 2.5 or 25 ng/ml. The number of osteoclasts was significantly increased by in vitro addition of risperidone, while osteoblast differentiation was not altered.
These studies indicate that risperidone treatment can have negative skeletal consequences by direct activation of osteoclast activity and by indirect non-cell autonomous mechanisms. Our findings further support the tenet that the negative side effects of SGAs on bone mass should be considered when weighing potential risks and benefits, especially in children and adolescents who have not yet reached peak bone mass. This article is part of a Special Issue entitled: Interactions Between Bone, Adipose Tissue and Metabolism. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
INTRODUCTION: Recent evidence indicates that creatine (Cr) supplementation can increase bone mineral density (BMD) in the femur of healthy growing rats. However, few studies have tested the effectiveness of this supplement under conditions of bone loss. OBJECTIVE: To investigate the effect of Cr supplementation on BMD and bone mineral content (BMC) in spontaneously hypertensive rats (SHR), an experimental model of low bone mass. MATERIALS AND METHODS: Sixteen 8-month-old male SHR were randomized into two experimental groups matched by body weight, namely: 1) Pl: SHR treated with placebo (distilled water; n = 8); and 2) Cr: SHR treated with Cr (n = 8). After nine weeks of supplementation, the animals were euthanized and the femur and lumbar spine (L1-L4) were analyzed by bone densitometry (dual-energy X-ray absorptiometry). RESULTS: There was no significant difference between the experimental groups in spine BMD (Pl = 0.249 ± 0.003 g/cm² vs. Cr = 0.249 ± 0.004 g/cm²; P = 0.95) or BMC (Pl = 0.509 ± 0.150 g vs. Cr = 0.509 ± 0.017 g; P = 0.99), or in total femur BMD (Pl = 0.210 ± 0.004 g/cm² vs. Cr = 0.206 ± 0.004 g/cm²; P = 0.49) or BMC (Pl = 0.407 ± 0.021 g vs. Cr = 0.385 ± 0.021 g; P = 0.46). CONCLUSION: In this study, using an experimental model of low bone mass, Cr supplementation did not affect bone mass.
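As a minimal sketch of how the reported P values follow from the summary statistics (assuming the ± values are SEMs with n = 8 per group, which the abstract does not state explicitly), the two-sample t statistic for total femur BMD can be recomputed directly:

```python
from math import sqrt

def t_from_summary(mean1, sem1, mean2, sem2):
    """Two-sample t statistic from group means and standard errors.

    For two groups of equal size, the standard error of the difference in
    means reduces to sqrt(sem1**2 + sem2**2)."""
    return (mean1 - mean2) / sqrt(sem1 ** 2 + sem2 ** 2)

# Total femur BMD (g/cm²), Pl vs. Cr groups, values taken from the abstract.
t = t_from_summary(0.210, 0.004, 0.206, 0.004)
print(round(t, 2))  # 0.71; with df = 14 (n = 8 per group) this gives P ≈ 0.49
```

The resulting P ≈ 0.49 matches the value reported for femur BMD above.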
Abstract:
Biomass burning represents one of the largest sources of particulate matter to the atmosphere, which results in a significant perturbation to the Earth's radiative balance coupled with serious negative impacts on public health. Globally, biomass burning aerosols are thought to exert a small warming effect of 0.03 W m⁻²; however, the uncertainty is four times greater than the central estimate. On regional scales, the impact is substantially greater, particularly in areas such as the Amazon Basin, where large, intense and frequent burning occurs on an annual basis for several months (usually from August to October). Furthermore, a growing number of people live within the Amazon region, which means that they are exposed to the deleterious health effects of substantial volumes of polluted air. Initial results from the South American Biomass Burning Analysis (SAMBBA) field experiment, which took place during September and October 2012 over Brazil, are presented here. A suite of instrumentation was flown on board the UK Facility for Airborne Atmospheric Measurement (FAAM) BAe-146 research aircraft and was supported by ground-based measurements, with extensive measurements made in Porto Velho, Rondônia. The aircraft sampled a range of conditions, including fresh biomass burning plumes, regional haze and elevated biomass burning layers within the free troposphere. The physical, chemical and optical properties of the aerosols across the region will be characterized in order to establish the impact of biomass burning on regional air quality, weather and climate.
Abstract:
Diel Vertical Migrants (DVMs) are mainly zooplankton and micronekton which migrate upward from 400-500 m depth every night to feed in the productive epipelagic zone, returning at dawn to the mesopelagic zone, where they defecate, excrete, and respire the ingested carbon. DVMs should therefore contribute to the biological pump in the ocean and, accordingly, to the global CO2 balance. Although these migrants are mainly small fishes, cephalopods and crustaceans, the lanternfishes (Myctophidae) usually contribute up to 80% of total DVM biomass. Thus, myctophids may represent a pathway accounting for a substantial export of organic carbon to the deep ocean. However, the magnitude of this transport is still poorly known. In order to assess this active flux of carbon, we performed a preliminary study of mesopelagic organisms around the Canary Islands. Here we present results on the diet, daily rations and feeding chronology of Lobianchia dofleini, Hygophum hygomii and Ceratoscopelus maderensis, three dominant species of myctophids performing diel vertical migrations in the subtropical Eastern North Atlantic Ocean. Samples were obtained on board the RV La Bocaina during June 2009. Myctophids were sorted and fixed in 4% buffered formalin, and the stomach contents of the target species were examined and weighed. Feeding chronology was approached by studying stomach fullness and the state of digestion of prey items in individuals from hauls performed at different times and depths. Our results provide further information about lanternfish feeding ecology in relation to their vertical migration patterns, as well as their contribution to the biological carbon pump.
Abstract:
Master's Degree in Intelligent Systems and Numerical Applications in Engineering (Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, SIANI)
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned a feasibility study for an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been realized to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
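The multi-gain sampling idea described above, picking the highest-gain channel that has not saturated and referring it back to unit gain, can be sketched as follows (illustrative only; the 12-bit full scale, gain factors and headroom are hypothetical values, not parameters of the LIRA06 chip):

```python
def best_sample(samples_by_gain, full_scale=4095, headroom=0.95):
    """Pick the highest-gain, non-saturated ADC sample and refer it to unit gain.

    `samples_by_gain` maps gain factor -> raw ADC code for the same PMT pulse.
    Sampling the pulse at several gains widens the usable dynamic range:
    small pulses use the high-gain channel, large pulses fall back to low gain."""
    usable = {g: s for g, s in samples_by_gain.items() if s < headroom * full_scale}
    if not usable:  # every channel saturated: fall back to the lowest gain
        g = min(samples_by_gain)
        return samples_by_gain[g] / g
    g = max(usable)
    return usable[g] / g

# A small pulse saturates nothing, so the x10 channel gives the best resolution.
print(best_sample({1: 35, 10: 350}))  # 35.0
```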
PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment on loan from the University of Roma and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As concerns the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations intended for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed us to store about 90 million events in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter, I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 µm presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed us to estimate the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The intrinsic MAPS resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
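The pitch/sqrt(12) figure quoted above is the textbook resolution of any binary (hit/no-hit) position detector; a minimal sketch (the 50 µm pitch is a hypothetical value for illustration, since the actual APSEL-4D pitch is not given here):

```python
from math import sqrt

def binary_resolution(pitch_um: float) -> float:
    """Expected single-hit resolution of a binary pixel or strip detector.

    For hits uniformly distributed across a cell of width `pitch`, the RMS of
    the residual (true position minus cell center) is pitch / sqrt(12)."""
    return pitch_um / sqrt(12)

# Hypothetical 50 µm pixel pitch, for illustration only.
print(round(binary_resolution(50.0), 1))  # 14.4 (µm)
```

The measured residual width also contains a multiple-scattering contribution, which is why the abstract notes that this effect had to be unfolded before comparing with the formula.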
Abstract:
Until recently the debate on the ontology of spacetime had only a philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of the Einstein equations which are mathematically different and related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models considered by the authors, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realism assumptions about theories and entities, the two authors then conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are plunged into a rich and complex non-local holistic structure provided by the "ontic part" of the metric field.
Therefore, according to the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical detection of point-events in the vacuum of matter (highlighted by the "ontic part" of the metric field, the DO) are closely dependent on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common both to the absolutist and substantivalist tradition and to the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least at the outset, independent of the previous ones, namely an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping the results achieved by the work of Lusanna and Pauri in mind and following their underlying philosophical assumptions, we intend to converge partially on their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered independently of the empirical necessities of measurement theory. Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure.
The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have no tensorial, and therefore no geometric, nature, but is analyzable at three levels: a first one, the potential level (which the theory identifies with the components of the metric tensor); a second one, the connections level (which in the theory determines the forces acting on masses and, as such, offers a level of description related to the one Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on this "third level" seems to present an immediate first advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatments analyzed, leads to identifying the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the spacetime geometrical structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's affirmation of the existence of a holistic, non-local and relational structure, on which the quantitatively identified properties of point-events depend (in addition to their own intrinsic detection), even if obtained from different considerations, is realized, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri).
In the end, matter, regarded as the physical counterpart of spacetime curvature whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DOs and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational and non-local structure of spacetime also through the "behavior" of the Weyl tensor in terms of the Riemann tensor. This "behavior", which leads to geometrical effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that matter acts indefinitely from elsewhere. Finally, we think that the characteristic relationality of the spacetime structure should be identified with this "primitive level of organization" of spacetime.
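For orientation, the sense in which the Weyl tensor is the "irreducible part" of the Riemann tensor invoked above is the standard Ricci decomposition, which in four dimensions reads (a textbook identity, added here for reference, not a result of the thesis):

```latex
\[
R_{abcd} \;=\; C_{abcd}
\;+\; \bigl( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \bigr)
\;-\; \tfrac{1}{3}\, R \, g_{a[c}\, g_{d]b}
\]
```

In vacuum the Einstein equations give $R_{ab} = 0$, so the Riemann tensor reduces entirely to the Weyl tensor $C_{abcd}$: it is precisely the curvature that survives in the empty spacetimes discussed above, which is why the "third level" can be characterized by Weyl invariants.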
Abstract:
In this thesis, we present our work on some generalisations of ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding recent advances in theoretical physics: the AdS/CFT correspondences. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point, integrability, and with a clear and ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality. In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recently conjectured AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin-chain nature of the dilatation operator of N = 4 Super-Yang-Mills theory, we concentrated our attention on one of the most important sectors, namely the SL(2) sector, which is also very interesting for the understanding of QCD, by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. A NLIE allows one, at least in principle, to make analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE clarifies how it reduces to a linear integral equation which governs the subleading order in s, O(s^0). This also holds in the regime L → ∞ with L/ln s finite (the long-operators case). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side. One of the most powerful methods to keep the finite-size spectrum of an integrable relativistic theory under control is the so-called thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum, of energies on the AdS side and of anomalous dimensions on the CFT one, at any value of the coupling constant and of the size. At leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in deriving, for a non-relativistic case, a method first invented by Lüscher to compute the finite-size effects on the mass spectrum of relativistic theories. We have thus formulated a new version of this approach, adapting it to the case of recently found classical string solutions on AdS4 × CP3, within the newly conjectured AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic-curve calculations, and in part are completely new and could thus be better understood through the rapidly evolving developments of this extremely exciting research field.
Abstract:
This thesis presents the outcomes of a Ph.D. course in telecommunications engineering. It is focused on the optimization of the physical layer of digital communication systems and provides innovations for both multi- and single-carrier systems. For the former type, we first addressed the problem of capacity in the presence of several nuisances. Moreover, we extended the concept of the Single Frequency Network to the satellite scenario, and then introduced a novel concept in subcarrier data mapping, resulting in a very low PAPR of the OFDM signal. For single-carrier systems, we proposed a method to optimize constellation design in the presence of strong distortion, such as the nonlinear distortion introduced by a satellite's on-board high-power amplifier; we then developed a method to calculate the bit/symbol error rate of a given constellation, achieving improved accuracy with respect to the traditional Union Bound at no additional complexity. Finally, we designed a low-complexity SNR estimator which saves half of the multiplications with respect to the ML estimator while retaining similar estimation accuracy.
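The PAPR figure of merit mentioned above can be computed directly from one OFDM symbol; a minimal stdlib-only sketch (a brute-force inverse DFT over hypothetical random QPSK data, not the thesis's mapping scheme):

```python
import cmath
import math
import random

def idft(X):
    """Brute-force inverse DFT (O(N^2); fine for a small illustration)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    power = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(power) / (sum(power) / len(power)))

random.seed(0)
N = 64  # hypothetical number of subcarriers
qpsk = [complex(random.choice([-1, 1]), random.choice([-1, 1])) / math.sqrt(2)
        for _ in range(N)]
print(f"PAPR = {papr_db(idft(qpsk)):.1f} dB")
```

The worst case, all subcarriers in phase, yields 10 log10(N) ≈ 18 dB for N = 64; low-PAPR subcarrier mapping schemes like the one mentioned above aim to keep typical symbols far below that.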
Abstract:
REX-ISOLDE is a pilot experiment for the post-acceleration of radioactive ion beams at the on-line mass separator ISOLDE at CERN. An important subproject was the realization of the efficient conversion of the continuous, low-energy ion beam into short ion pulses of high quality. For this purpose, REXTRAP, a gas-filled Penning trap, was developed, commissioned, and systematically investigated within the scope of this work.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are deployed into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems, with very low power consumption and small size requirements which enable them to reduce interference with the physical phenomena sensed and to allow easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must, however, be carefully minimized. Imaging could play, and already plays, an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches constitute a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and to extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally. Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller, Model Predictive Control (MPC), to help the system improve its performance, outperforming naive power management policies.
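A naive, threshold-based trigger policy of the kind an MPC controller is meant to outperform can be sketched as follows (illustrative only; the state names and battery threshold are hypothetical, and the thesis's actual controller is model-predictive rather than rule-based):

```python
from enum import Enum

class NodeState(Enum):
    SLEEP = "sleep"        # camera and radio off, only the PIR sensor is armed
    CAPTURE = "capture"    # camera on, processing frames locally
    TRANSMIT = "transmit"  # radio on, sending detection results

def next_state(state, pir_triggered, battery_level, low_battery=0.2):
    """Naive energy-aware policy: the PIR event wakes the camera, and the
    radio is used only when the battery level allows it."""
    if state is NodeState.SLEEP:
        return NodeState.CAPTURE if pir_triggered else NodeState.SLEEP
    if state is NodeState.CAPTURE:
        if battery_level < low_battery:
            return NodeState.SLEEP       # too little energy to transmit
        return NodeState.TRANSMIT
    return NodeState.SLEEP               # after transmitting, go back to sleep

# A PIR event wakes the node; with a healthy battery it transmits, then sleeps.
s = next_state(NodeState.SLEEP, pir_triggered=True, battery_level=0.8)
print(s)  # NodeState.CAPTURE
```

Fixed thresholds like `low_battery` ignore the solar-charging forecast, which is exactly the information an MPC controller exploits to schedule activity ahead of time.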
Abstract:
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work, we gathered the industrial needs relevant to ESA and the main European space stakeholders, and we consolidated them into a set of high-level technical requirements. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling the high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms with the semantics, the assumptions and the constraints of the computational model; and (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) had already been proved by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) rigorous separation of concerns, achieved through support for design views and careful allocation of concerns to dedicated software entities; (ii) support for specification and model-based analysis of extra-functional properties; and (iii) the inclusion of space-specific concerns.
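The separation between functional design entities and the extra-functional properties fed to static analysis can be illustrated with a small sketch. This is not ESA's actual metamodel; all names, interfaces and timing numbers below are hypothetical.

```python
# Illustrative sketch of a component model that separates functional
# structure (provided/required interfaces) from extra-functional
# properties used by static analysis. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str
    operations: list

@dataclass
class TimingProps:
    """Extra-functional properties asserted for schedulability analysis."""
    period_ms: int = 0
    wcet_ms: int = 0
    deadline_ms: int = 0

@dataclass
class Component:
    name: str
    provides: list = field(default_factory=list)
    requires: list = field(default_factory=list)
    props: TimingProps = field(default_factory=TimingProps)

tm = Component(
    name="TelemetryManager",
    provides=[Interface("ITelemetry", ["send_frame"])],
    requires=[Interface("IBus", ["write"])],
    props=TimingProps(period_ms=250, wcet_ms=12, deadline_ms=250),
)
# A necessary condition any schedulability analysis would check:
print(tm.props.wcet_ms <= tm.props.deadline_ms)  # -> True
```

The design intent is that a tool can analyze `props` without inspecting the component's implementation, which is what makes the architectural description statically analyzable.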
Abstract:
In this work, peroxide and formaldehyde measurements in the troposphere were carried out and scientifically interpreted within the framework of the UTOPIHAN and HOHPEX04 projects. The UTOPIHAN measurements took place on board an aircraft converted for research purposes (a Learjet 35A), mainly in the free and in particular the upper troposphere over Europe. The HOHPEX04 measurements, by contrast, were designed as ground-based measurements at the mountain station Hohenpeißenberg (Bavarian Alpine foothills), which lies alternately within the near-surface boundary layer and in air masses decoupled from that layer. To ensure that the measurements could be evaluated quantitatively, the instruments used, the AL 2001CA (peroxides) and AL 4021 (formaldehyde) (AEROLASER), both based on chemical derivatization and fluorimetric detection, were characterized in detail. In particular, the known ozone interference of both instruments was investigated in a large number of laboratory experiments under varying boundary conditions with respect to the water vapor and hydrocarbon content of the air. For both compounds, altitude and latitude profiles were compiled and compared with the results of a 3D chemistry-transport model (CTM) and with earlier studies. A further chapter presents the results of a quantitative study on the influence of deep convection on the HCHO budget in the middle and upper troposphere. This study concludes that the rapid upward transport of precursor gases of HCHO and HOx, such as methanol, acetone and even highly soluble trace gases such as CH3OOH and H2O2, out of the boundary layer can have a significant influence on the budgets of HCHO, HOx and also O3 in the upper troposphere, an influence that persists over several days owing to the longer lifetime of NOx and is therefore large-scale.
The findings of the study further suggest that erroneous model predictions of the NO mixing ratios in the tropopause region, related for example to shortcomings of the model regarding the height of convection and stratosphere-troposphere exchange, are chiefly responsible for the discrepancies found between the measured data and the 3D chemistry-transport model used. To strengthen these conclusions, a sensitivity study was carried out in which the concentrations of several chemical compounds as well as the photolysis rates were varied. A further study on the influence of various parameters on the CH3OOH/H2O2 ratio concludes that this ratio is not an ideal indicator of cloud processing of air masses, whereas a significantly positive deviation of the H2O2/H2O ratio in the upper troposphere can be a good indicator of rapidly uplifted air masses. Within this study, altitude and latitude profiles of the CH3OOH/H2O2 ratio are also discussed. In a final investigation of HCHO measurements at the Hohenpeißenberg observatory in summer 2004, correlations of other trace gases such as O3, PAN, CO, NOy and isoprene with HCHO are interpreted for data divided into two wind-direction sectors. In this context, an attempt is also made to explain the observed diurnal cycle of HCHO.
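The uplift indicator mentioned above amounts to comparing a measured H2O2/H2O ratio against an assumed upper-tropospheric background. The sketch below illustrates that comparison; the background ratio, units and deviation factor are hypothetical, not values from the study.

```python
# Toy illustration of flagging rapidly uplifted air masses via a
# positive deviation of the H2O2/H2O ratio from an assumed background.
# Background ratio and deviation factor are hypothetical.

def flag_uplift(h2o2_ppbv, h2o_ppmv, background_ratio, factor=2.0):
    """Flag a sample whose H2O2/H2O ratio exceeds `factor` times the
    assumed upper-tropospheric background ratio."""
    ratio = h2o2_ppbv / h2o_ppmv
    return ratio > factor * background_ratio

background = 0.005                          # hypothetical background ratio
print(flag_uplift(1.2, 80.0, background))   # 0.015 > 0.010 -> True
print(flag_uplift(0.4, 80.0, background))   # 0.005        -> False
```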
Abstract:
Canned tuna is one of the most widespread and recognizable fish commodities in the world. Across all oceans, 80% of total tuna catches are taken by the purse seine fishery, whose target species in tropical waters are yellowfin (Thunnus albacares), bigeye (Thunnus obesus) and skipjack (Katsuwonus pelamis). Even though this fishing gear is claimed to be very selective, there are high levels of by-catch, especially when operating under Fish Aggregating Devices (FADs). The main problem is the underestimation of by-catch data. To address this problem, the scientific community has developed several dedicated programs (e.g. the Observe Program) to collect data on both target species and by-catch with observers on board. The purposes of this study are to estimate the quantity and composition of target species and by-catch in the tuna purse seine fishery operating in tropical waters, and to highlight possible seasonal variability in the by-catch ratio (tunas versus by-catch). Data were collected under the French scientific program "Observe" on board the French tuna purse seiner "Via Avenir" during a fishing trip in the Gulf of Guinea (Central-East Atlantic) from August to September 2012. Furthermore, some by-catch specimens were sampled to obtain more information about size-class composition. To achieve these purposes we shared our data with the French Institute of Research for Development (IRD), which holds data collected by observers on board in the same study area. Yellowfin tuna turns out to be the main species caught in all trips considered (around 71% of total catches), especially in free swimming school (FSC) sets, whereas skipjack tuna is the main species caught under FADs. Different by-catch percentages are observed for the two fishing modes: by-catch incidence is higher in FAD sets (96.5% of total by-catch) than in FSC sets (3.5%), and the main by-catch category is little tuna (73%).
When pooling data for both fishing set types used in the purse seine fishery, the overall by-catch/catch ratio is 5%, a lower level than for other fishing gears such as long-lining and trawling.
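The pooled ratio is simple arithmetic over the two set types, which the toy recomputation below makes explicit. Only the 5% pooled ratio and the 96.5%/3.5% FAD/FSC split of by-catch come from the text; the tonnages are hypothetical.

```python
# Toy recomputation of the pooled by-catch/catch ratio.
# Tonnages are hypothetical; the percentages match those in the text.

fad_bycatch = 38.6   # FAD sets carry ~96.5% of total by-catch
fsc_bycatch = 1.4    # FSC sets carry ~3.5%
total_catch = 800.0  # hypothetical total catch (all species, both modes)

total_bycatch = fad_bycatch + fsc_bycatch
ratio_pct = 100.0 * total_bycatch / total_catch
fad_share_pct = 100.0 * fad_bycatch / total_bycatch

print(round(ratio_pct, 1))      # -> 5.0
print(round(fad_share_pct, 1))  # -> 96.5
```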
Abstract:
In this work, the electron emission from nanoparticles on surfaces was investigated by means of spectroscopic photoelectron microscopy. Specifically, metallic nanoclusters were studied, as self-organized ensembles on silicon or glass substrates, and in addition a metal-chalcogenide (MoS2) nanotube prototype on silicon. The main part of the investigations focused on the interaction of fs laser radiation with the nanoparticles. The photon energy was smaller than the work function of the samples under study, so that one-photon photoemission could be ruled out. Our investigations showed that, going from a continuous metal film to cluster films, a different emission mechanism appears in competition with multiphoton photoemission and begins to dominate for small clusters. The nature of this new mechanism was examined through a variety of experiments. The transition from a continuous to a nanoparticle film is accompanied by an increase of the emission current by more than an order of magnitude. The photoemission intensity grows with decreasing temporal width of the laser pulse, but this dependence becomes less steep with decreasing particle size. The experimental results were explained by different electron emission mechanisms, e.g. multiphoton photoemission (nPPE), thermionic emission and thermally assisted nPPE, as well as optical field emission. The first mechanism prevails for continuous films and particles with sizes above several tens of nanometers, the second and third for films of nanoparticles a few nanometers in size. The microspectroscopic measurements confirmed the 2PPE emission mechanism of thin silver films under "blue" laser excitation (hν = 375-425 nm).
The Fermi-level onset is relatively sharp and shifts by 2hν when the photon energy is increased, whereas under "red" laser excitation (hν = 750-850 nm) it is markedly broadened. It was found that, with increasing laser power, the yield of low-energy electrons grows more weakly than the yield of higher-energy electrons near the Fermi edge within one spectrum. This is a clear indication of the coexistence of different emission mechanisms within a single spectrum. To understand the size dependence of the emission behavior theoretically, a statistical approach to the light absorption of small metal particles was derived and discussed. In additional investigations, the electron emission properties under laser excitation were compared with a different kind of excitation: the passage of a tunneling current through a metal cluster film close to the percolation threshold. The electrical and emission properties of current-carrying silver cluster films, prepared in a narrow gap (5-25 µm wide) between silver contacts on an insulator, were investigated for the first time with an emission electron microscope (EEM). Electron emission sets in within the non-Ohmic regime of the current-voltage curve of the cluster film. We studied the behavior of a single emission center in the EEM. It was found that the emission centers in a current-carrying silver cluster film are point sources of electrons that can sustain high emission current densities (more than 100 A/cm2). The width of the energy distribution of the electrons from a single emission center was estimated to be about 0.5-0.6 eV. As the emission mechanism, thermionic emission from the steady-state hot electron gas in current-carrying metallic particles is proposed.
Size-selected individual MoS2 nanotubes deposited on Si substrates were investigated by time-of-flight-based two-photon photoemission spectromicroscopy. Under fs laser excitation the nanotube spectra showed a surprisingly high emission intensity, clearly higher than that of the SiOx substrate surface. In contrast, the tubes were invisible under VUV excitation at hν = 21.2 eV. An ab initio calculation for a MoS2 slab explains the high intensity by a high density of free intermediate states in the two-photon transition at hν = 3.1 eV.
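The order n of a multiphoton process such as the 2PPE discussed above is commonly identified from the power-law dependence Y ∝ I^n of the yield on intensity: on a log-log plot the slope equals n. The sketch below recovers n from synthetic data points, not from the measured spectra.

```python
# Recovering the order n of an n-photon photoemission process from the
# slope of log(yield) vs log(intensity). Data points are synthetic.
import math

intensities = [1.0, 2.0, 4.0, 8.0]   # arbitrary units
n_true = 2                           # two-photon process (2PPE)
yields = [i ** n_true for i in intensities]

# Least-squares slope in log-log space
xs = [math.log(i) for i in intensities]
ys = [math.log(y) for y in yields]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(round(slope))  # -> 2
```

A slope falling below the integer expected for pure nPPE as particles shrink is exactly the kind of signature the abstract attributes to competing thermally assisted emission.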