930 results for drought reconstruction


Relevance: 20.00%

Publisher:

Abstract:

A single picture provides a largely incomplete representation of the scene one is looking at. It usually reproduces only a limited spatial portion of the scene, determined by the standpoint and the viewing angle, and it contains only instantaneous information. Very little can therefore be understood about the geometrical structure of the scene, and the position and orientation of the observer with respect to it remain hard to guess. When multiple views, taken from different positions in space and time, observe the same scene, a much deeper knowledge is potentially achievable. Understanding inter-view relations enables the construction of a collective representation by fusing the information contained in every single image. Visual reconstruction methods face the formidable, and still unanswered, challenge of delivering a comprehensive representation of the structure, motion and appearance of a scene from visual information. Multi-view visual reconstruction deals with the inference of relations among multiple views and the exploitation of the revealed connections to attain the best possible representation. This thesis investigates novel methods and applications in the field of visual reconstruction from multiple views. Three main threads of research have been pursued: dense geometric reconstruction, camera pose reconstruction, and sparse geometric reconstruction of deformable surfaces. Dense geometric reconstruction aims at delivering the appearance of a scene at every single point. The construction of a large panoramic image from a set of traditional pictures has been extensively studied in the context of image mosaicing techniques. An original algorithm for sequential registration suitable for real-time applications has been conceived. The integration of the algorithm into a visual surveillance system has led to robust and efficient motion detection with Pan-Tilt-Zoom cameras. Moreover, an evaluation methodology for quantitatively assessing and comparing image mosaicing algorithms has been devised and made available to the community. Camera pose reconstruction deals with the recovery of the camera trajectory across an image sequence. A novel mosaic-based pose reconstruction algorithm has been conceived that exploits image mosaics and traditional pose estimation algorithms to deliver more accurate estimates. An innovative markerless vision-based human-machine interface has also been proposed, allowing a user to interact with a gaming application by moving a hand-held consumer-grade camera in unstructured environments. Finally, sparse geometric reconstruction refers to the computation of the coarse geometry of an object at a few preset points. In this thesis, an innovative shape reconstruction algorithm for deformable objects has been designed. A cooperation with the Solar Impulse project made it possible to deploy the algorithm in a very challenging real-world scenario, i.e. the accurate measurement of airplane wing deformations.
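Sequential registration for mosaicing typically chains pairwise inter-frame homographies into transforms that map every frame into a common reference frame. As a hedged illustration of that composition step (not the thesis algorithm itself, whose registration details are not given here), a minimal numpy sketch:

```python
import numpy as np

def compose_to_reference(pairwise_h):
    """Chain pairwise homographies into mosaic-referencing transforms.

    pairwise_h[i] is a 3x3 matrix mapping frame i+1 into frame i.
    Returns a list whose entry i maps frame i into frame 0 (the mosaic frame).
    """
    to_ref = [np.eye(3)]
    for h in pairwise_h:
        acc = to_ref[-1] @ h            # frame k -> 0 = (k-1 -> 0) o (k -> k-1)
        to_ref.append(acc / acc[2, 2])  # normalize the homogeneous scale
    return to_ref

def warp_point(h, xy):
    """Apply a 3x3 homography to a 2D point given as (x, y)."""
    v = h @ np.array([xy[0], xy[1], 1.0])
    return v[:2] / v[2]
```

In a real pipeline the pairwise matrices would come from feature matching and robust estimation (e.g. RANSAC); here they are assumed given.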

Abstract:

Stress recovery techniques have been an active research topic since 1987, when Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure were proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and to solve the resulting equations by a least-squares scheme. In recent years another procedure, based on the minimization of complementary energy and called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it substantially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented and the procedure is improved, aiming at obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested. The first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Square Displacements (LSD) is introduced. This procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. In fact, it has been observed that the major part of the error affecting stress resultants is introduced when shape functions are differentiated in order to obtain strain components from displacements. The procedure proves to be ultraconvergent and is extremely cost-effective, as it requires as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing needed to obtain stress resultants with the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
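The kernel of SPR-type recovery is a small least-squares problem per patch: sample the finite element stresses at the superconvergent points and fit a low-order polynomial over the patch. A hedged numpy sketch of a linear fit in 2D (illustrative only, not the thesis code):

```python
import numpy as np

def recover_stress_patch(points, stresses):
    """Least-squares fit of a linear polynomial a0 + a1*x + a2*y to
    stress samples taken at the superconvergent points of a patch."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, stresses, rcond=None)
    return coeffs

def evaluate_recovered(coeffs, xy):
    """Evaluate the recovered stress polynomial at a nodal position."""
    return coeffs[0] + coeffs[1] * xy[0] + coeffs[2] * xy[1]
```

The recovered nodal values from overlapping patches are then averaged to build the enhanced stress field used in error estimation.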

Abstract:

The OPERA experiment aims at the direct observation of ν_mu -> ν_tau oscillations in the CNGS (CERN Neutrinos to Gran Sasso) neutrino beam produced at CERN; since the ν_e contamination in the CNGS beam is low, OPERA will also be able to study the sub-dominant oscillation channel ν_mu -> ν_e. OPERA is a large-scale hybrid apparatus divided into two supermodules, each equipped with electronic detectors, an iron spectrometer and a highly segmented ~0.7 kton target section made of Emulsion Cloud Chamber (ECC) units. During my research work in the Bologna laboratory I took part in the set-up of the automatic scanning microscopes, studying and tuning the performance and efficiency of the scanning system with emulsions exposed to a test beam at CERN in 2007. Once the triggered bricks were distributed to the collaboration laboratories, my work centered on the procedure used for the localization and reconstruction of neutrino events.

Abstract:

Background. One of the phenomena observed in human aging is the progressive increase of a systemic inflammatory state, a condition referred to as “inflammaging”, negatively correlated with longevity. A prominent mediator of inflammation is the transcription factor NF-kB, which acts as a key transcriptional regulator of many genes coding for pro-inflammatory cytokines. Many different signaling pathways, activated by very diverse stimuli, converge on NF-kB, resulting in a regulatory network characterized by high complexity. NF-kB signaling has been proposed to be responsible for inflammaging. The scope of this analysis is to provide a wider, systemic picture of this intricate signaling and interaction network: the NF-kB pathway interactome. Methods. The study was carried out following a workflow for gathering information from the literature as well as from several pathway and protein interaction databases, and for integrating and analyzing the existing data and the reconstructed representations using the available computational tools. Considerable manual intervention was necessary to integrate data from multiple sources into mathematically analyzable networks. The reconstruction of the NF-kB interactome pursued with this approach provides a starting point for a general view of the architecture, and for a deeper analysis and understanding, of this complex regulatory system. Results. A “core” and a “wider” NF-kB pathway interactome, consisting of 140 and 3146 proteins respectively, were reconstructed and analyzed through a mathematical, graph-theoretical approach. Among other interesting features, the topological characterization of the interactomes shows that a relevant number of interacting proteins are in turn products of genes that are themselves controlled and regulated in their expression by NF-kB transcription factors. These “feedback loops”, not always well known, deserve deeper investigation, since they may have a role in tuning the response and the output consequent to NF-kB pathway initiation, in regulating the intensity of the response, or in maintaining its homeostasis and balance in order to make the functioning of such a critical system more robust and reliable. This integrated view sheds light on the functional structure and on some of the crucial nodes of the NF-kB transcription factor interactome. Conclusion. Framing the structure and dynamics of the NF-kB interactome in a wider, systemic picture would be a significant step toward a better understanding of how NF-kB globally regulates diverse gene programs and phenotypes. This study represents a step towards a more complete and integrated view of the NF-kB signaling system.
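Feedback loops of the kind described can be enumerated on a reconstructed interactome by searching the directed interaction graph for simple cycles passing through the NF-kB node. A small, self-contained sketch; the edge names below are hypothetical illustrations, not data from the thesis:

```python
def feedback_loops(edges, hub):
    """Enumerate simple directed cycles through `hub`.

    edges: iterable of (source, target) pairs standing for regulatory
    or physical interactions; hub: the node of interest (e.g. 'NFKB1').
    """
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    cycles = []

    def dfs(node, path):
        for nxt in adj.get(node, []):
            if nxt == hub:
                cycles.append(path + [hub])  # closed a loop back to the hub
            elif nxt not in path:
                dfs(nxt, path + [nxt])

    dfs(hub, [hub])
    return cycles
```

On interactomes of thousands of proteins, exhaustive cycle enumeration explodes combinatorially; bounding the path length or using dedicated graph libraries would be necessary in practice.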

Abstract:

In this treatise we consider finite systems of branching particles where the particles move independently of each other according to d-dimensional diffusions. Particles are killed at a position-dependent rate, leaving at their death position a random number of descendants according to a position-dependent reproduction law. In addition, particles immigrate at a constant rate (one immigrant per immigration time). A process with the above properties is called a branching diffusion with immigration (BDI). In the first part we present the model in detail and discuss the properties of the BDI under our basic assumptions. In the second part we consider the problem of reconstructing the trajectory of a BDI from discrete observations. We observe the positions of the particles at discrete times; in particular, we assume that we have no information about the pedigree of the particles. A natural question arises if we want to apply statistical procedures to the discrete observations: how can we find pairs of particle positions which belong to the same particle? We give an easy-to-implement 'reconstruction scheme' which allows us to redraw, or 'reconstruct', parts of the trajectory of the BDI with high accuracy; moreover, asymptotically the whole path can be reconstructed. We also present simulations which show that our partial reconstruction rule is tractable in practice. In the third part we study how the partial reconstruction rule fits into statistical applications. As an extensive example, we present a nonparametric estimator for the diffusion coefficient of a BDI where the particles move according to one-dimensional diffusions. This estimator is based on the Nadaraya-Watson estimator for the diffusion coefficient of one-dimensional diffusions, and it uses the partial reconstruction rule developed in the second part. We prove a rate of convergence for this estimator and finally present simulations which show that the estimator works well even if we leave our set of assumptions.
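The pairing problem at the heart of the reconstruction scheme can be illustrated by a greedy nearest-neighbour rule: link each particle position at one observation time to the closest unused position at the next. This is only a hedged toy version of the idea; the actual scheme and its accuracy guarantees are developed in the thesis itself.

```python
import math

def match_positions(prev, curr, max_dist):
    """Greedily pair positions in `prev` with the nearest unused
    position in `curr`, rejecting pairs farther apart than max_dist.

    Returns a list of (i, j) index pairs: prev[i] matched to curr[j].
    Unmatched positions (births, deaths, immigrants) are left out.
    """
    pairs, used = [], set()
    for i, p in enumerate(prev):
        best_j, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

The tolerance max_dist plays the role of the observation-frequency assumption: between two observations a diffusing particle typically moves only a short distance, so distant candidates are discarded.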

Abstract:

This thesis contributes to volcanic hazard assessment through the reconstruction of some historical flank eruptions of Etna, in order to obtain quantitative data (volumes, effusion rates, etc.) for characterizing the recent effusive activity, quantifying the impact on the territory and defining mitigation actions, such as containment barriers, for reducing the volcanic risk. The reconstruction was based on a quantitative approach using data extracted from aerial photographs and topographic maps. The approach yields the temporal evolution of the lava flow field and an estimate of the Time Averaged Discharge Rate (TADR), obtained by dividing the volume emplaced over a given time interval by the corresponding duration. The analysis concerned the 2001, 1981 and 1928 Etna eruptions, chosen because of their impact on inhabited areas. The results showed an extraordinarily high effusion rate for the 1981 and 1928 eruptions (over 600 m^3/s), unusual for Etna eruptions. For the 1981 eruption, an eruptive model was proposed to explain the high discharge rate. The obtained TADRs were used as input for simulations of lava flow propagation, in order to evaluate different volcanic hazard scenarios and analyze different mitigation actions against lava flow invasion. It was shown how numerical simulations can be used to evaluate the effectiveness of barrier construction and to support optimal barrier design. In particular, gabions were proposed as an improvement over earthen barriers: gabion barriers make it easy to build modular structures, reducing the handled volumes and the intervention time. To evaluate operational constraints, an experimental test was carried out to test the filling of the gabions with volcanic rock and to evaluate their deformation during transport and placement.
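The TADR computation described above is simple enough to state directly: the emplaced volume difference between two successive mappings of the flow field, divided by the elapsed time. A minimal sketch (units assumed to be m^3 and seconds; the numbers in the test are invented, not eruption data):

```python
def tadr_series(volumes_m3, times_s):
    """Time Averaged Discharge Rate between successive flow-field
    mappings: (V[k+1] - V[k]) / (t[k+1] - t[k]), in m^3/s."""
    return [
        (volumes_m3[k + 1] - volumes_m3[k]) / (times_s[k + 1] - times_s[k])
        for k in range(len(volumes_m3) - 1)
    ]
```

Each mapping comes from an aerial photograph or topographic map, so the TADR resolution is limited by how frequently the flow field was surveyed.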

Abstract:

The procedure for event location in the OPERA ECC has been optimized for penetrating particles, while it is less efficient for electrons. For this reason, a new procedure has been defined in order to recover events with an electromagnetic shower in the final state that are not located by the standard one. The new procedure includes the standard chain, during which several electromagnetic-shower hints are defined by means of the available data; if the event is not located, the presence of an electromagnetic-shower hint triggers a dedicated procedure. The old and new location procedures have then been simulated in order to obtain the standard location efficiency and the possible gain due to the new procedure for events with an electromagnetic shower. Furthermore, a data-Monte Carlo comparison has been performed for the neutral-current events of the 2008 and 2009 runs in order to validate the Monte Carlo. Finally, the expected number of electron neutrino interactions for the 2008 and 2009 runs has been evaluated and compared with the available data.

Abstract:

This thesis work has been developed in the framework of a new experimental campaign, proposed by the NUCL-EX Collaboration (INFN III Group), in order to progress in the understanding of the statistical properties of light nuclei at excitation energies above the particle emission threshold, by measuring exclusive data from fusion-evaporation reactions. The determination of the nuclear level density in the A~20 region, the understanding of the statistical behavior of light nuclei with excitation energies of ~3 A.MeV, and the measurement of observables linked to the presence of cluster structures in nuclear excited levels are the main physics goals of this work. On the theory side, the contribution of this work lies in the development of a dedicated Monte-Carlo Hauser-Feshbach code for the evaporation of the compound nucleus. The experimental part of this thesis consisted in the participation in the 12C+12C measurement at 95 MeV beam energy at Laboratori Nazionali di Legnaro - INFN, using the GARFIELD+Ring Counter (RCo) set-up, from the beam-time request to the data taking, data reduction, detector calibrations and data analysis. Different results of the data analysis are presented in this thesis, together with a theoretical study of the system performed with the new statistical decay code. As a result of this work, constraints on the nuclear level density at high excitation energy are given for light systems ranging from C up to Mg. Moreover, pre-equilibrium effects, tentatively interpreted as alpha-clustering effects, are highlighted, both in the entrance channel of the reaction and in the dissipative dynamics on the path towards thermalisation.
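The Monte-Carlo Hauser-Feshbach approach can be caricatured in a few lines: at each evaporation step a decay channel is selected with probability proportional to its decay width. The sketch below is a hedged toy, with made-up channel names and widths, and none of the physics (level densities, transmission coefficients) that the thesis code actually computes:

```python
import random

def branching_ratios(widths):
    """Normalize channel decay widths into branching probabilities."""
    total = sum(widths.values())
    return {ch: w / total for ch, w in widths.items()}

def sample_channel(widths, rng):
    """Draw one decay channel with probability proportional to its width."""
    r = rng.random() * sum(widths.values())
    acc = 0.0
    for ch, w in widths.items():
        acc += w
        if r <= acc:
            return ch
    return ch  # numerical safety net for rounding at the upper edge
```

Repeating the draw along a decay chain, with widths recomputed for each daughter nucleus and excitation energy, is what turns this one-step sampler into a full evaporation cascade.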

Abstract:

Quality control of medical radiological systems is of fundamental importance and requires efficient methods for accurately determining the X-ray source spectrum. Straightforward measurements of X-ray spectra under standard operating conditions require limiting the high photon flux, so the measurement has to be performed in a laboratory; optimal quality control, however, requires frequent in situ measurements, which can only be performed with a portable system. To reduce the photon flux by three orders of magnitude, an indirect technique based on the scattering of the X-ray source beam by a solid target is used. The measured spectrum lacks information because of transport and detection effects, and the source spectrum is therefore unfolded by solving the matrix equation that formally represents the scattering problem. However, the algebraic system is ill-conditioned, and it is not possible to obtain a satisfactory solution directly: special strategies are necessary to circumvent the ill-conditioning. Numerous attempts have been made to solve this problem using purely mathematical methods. In this thesis, a more physical point of view is adopted: the proposed method uses both the forward and the adjoint solutions of the Boltzmann transport equation to generate a better conditioned linear algebraic system. The procedure was tested first on numerical experiments, giving excellent results, and then verified with experimental measurements performed at the Operational Unit of Health Physics of the University of Bologna. The reconstructed spectra were compared with the ones obtained from straightforward measurements, showing very good agreement.
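The kind of purely mathematical workaround that the thesis moves beyond can be made concrete with the classic example, Tikhonov regularization: the normal equations of the unfolding problem are damped so that the unstable directions of the response matrix no longer dominate the solution. A minimal numpy sketch with a toy response matrix (not the spectra or matrices of the thesis):

```python
import numpy as np

def tikhonov_unfold(response, measured, alpha):
    """Solve (R^T R + alpha I) s = R^T m, a damped least-squares
    estimate of the source spectrum s from the measured spectrum m."""
    n = response.shape[1]
    lhs = response.T @ response + alpha * np.eye(n)
    return np.linalg.solve(lhs, response.T @ measured)
```

Choosing alpha trades noise amplification against bias, which is exactly the kind of ad hoc tuning a physically better conditioned system avoids.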

Abstract:

Bivalve mollusk shells are useful tools for multi-species and multi-proxy paleoenvironmental reconstructions with a high temporal and spatial resolution. Past environmental conditions can be reconstructed from shell growth and from stable oxygen and carbon isotope ratios, which provide an archive for temperature, freshwater fluxes and primary productivity. The purpose of this thesis is the reconstruction of Holocene climate and environmental variations in the North Pacific with a high spatial and temporal resolution using marine bivalve shells. The thesis focuses on several Holocene time periods and multiple regions in the North Pacific, including Japan, Alaska (AK), British Columbia (BC) and Washington State, which are affected by the monsoon, the Pacific Decadal Oscillation (PDO) and the El Niño/Southern Oscillation (ENSO). Such high-resolution proxy data from the marine realm of mid- and high latitudes are still rare; this study therefore contributes to the optimization and verification of climate models. However, before using bivalves for environmental reconstructions and seasonality studies, their life history traits must be well studied in order to temporally align and interpret the geochemical record. These calibration studies are essential to ascertain the usefulness of the selected bivalve species as paleoclimate proxy archives. This work focuses on two bivalve species, the short-lived Saxidomus gigantea and the long-lived Panopea abrupta. Sclerochronology and oxygen isotope ratios of different shell layers of P. abrupta were studied in order to test the reliability of this species as a climate archive. The annual increments are clearly discernible in umbonal shell portions, and increment widths should be measured there. A reliable reconstruction of paleotemperatures may only be achieved by exclusively sampling the outer shell layer of multiple contemporaneous specimens. Life history traits (e.g., timing of growth line formation, duration of the growing season and growth rates) and stable isotope ratios of recent S. gigantea from AK and BC were analyzed in detail. Furthermore, a growth-temperature model based on S. gigantea shells from Alaska was established, which provides a better understanding of the hydrological changes related to the Alaska Coastal Current (ACC). This approach allows the independent estimation of water temperature and salinity from variations in the width of lunar daily growth increments of S. gigantea; temperature explains 70% of the variability in shell growth. The model was calibrated and tested with modern shells and then applied to archaeological specimens. The time period between 988 and 1447 cal yrs BP was characterized by colder (~1-2°C) and much drier (2-5 PSU) summers, and a likely much slower flowing ACC than at present. In contrast, the summers during the time interval of 599-1014 cal yrs BP were colder (up to 3°C) and fresher (1-2 PSU) than today; the Aleutian Low may have been stronger, and the ACC was probably flowing faster during this time.
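A growth-temperature model of this kind boils down to a calibration regression: fit increment width against measured water temperature on modern shells, then invert the fit on archaeological material. The sketch below assumes a simple linear relation and invented numbers; the actual model in the thesis is more elaborate and also separates the salinity signal:

```python
import numpy as np

def calibrate(widths_um, temps_c):
    """Fit temperature as a linear function of daily increment width,
    using modern shells grown under logged water temperatures."""
    slope, intercept = np.polyfit(widths_um, temps_c, 1)
    return slope, intercept

def reconstruct_temp(model, width_um):
    """Apply the calibrated model to an archaeological increment width."""
    slope, intercept = model
    return slope * width_um + intercept
```

The 70% of growth variability explained by temperature in the thesis corresponds to the R^2 of such a calibration; the unexplained remainder carries, among other things, the salinity information.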

Abstract:

The atmosphere is a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for the understanding of different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, which is dated to 12,900 years before present. Taken together, the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2. The frequency of glacial dust storm events and their paleo wind directions are detected by high-resolution grain size and provenance analysis of the lake sediments. Two different methods are applied: geochemical measurements of the sediment using µXRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections). It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel North-South zone, so carbonate-rich aeolian sediment is most likely transported towards the Dehner dry maar by easterly winds. A methodology is developed which limits the detection to the aeolian-transported carbonate particles in the sediment: the RADIUS-carbonate module.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east wind frequency are increased in comparison to MIS-2. These results lead to the suggestion that the atmospheric circulation was affected by more turbulent conditions during MIS-3, in comparison to the more stable circulation during the full glacial conditions of MIS-2. The results of the investigation of the dust records are finally evaluated in relation to a study of atmospheric general circulation models (AGCMs) for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east wind conditions and of east wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all the experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high from over the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with a contemporary enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period, and it has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe. While the proxy data from the sediment cores yield a binary signal of wind direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models is successful: it shows a possible distribution of high and low pressure areas, and thus the direction and strength of the wind fields which have the capacity to transport dust. In conclusion, the combination of numerical models, which enhance the understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.

Abstract:

Sweet sorghum, a C4 crop of tropical origin, is gaining momentum as a multipurpose feedstock to tackle the growing environmental, food and energy security demands. Under temperate climates sweet sorghum is considered a potential bioethanol feedstock; however, being a relatively new crop in such areas, its physiological and metabolic adaptability has to be evaluated, especially with respect to the more frequent and severe drought spells occurring throughout the growing season and to the cold temperatures during the establishment period of the crop. The objective of this thesis was to evaluate some adaptive photosynthetic traits of sweet sorghum under drought and cold stress, both in the field and under controlled conditions. To meet this goal, a series of experiments was carried out. A new cold-tolerant sweet sorghum genotype was sown in rhizotrons of 1 m3 in order to evaluate its tolerance to progressive drought, until plant death, at young and mature stages. Young plants were able to retain a high photosynthetic rate for 10 days longer than mature plants. This response was associated with an efficient capacity for PSII down-regulation, mediated by light energy dissipation, closure of reaction centers (JIP-test parameters), and accumulation of glucose and sucrose. On the other hand, once sweet sorghum plants entered the blooming stage, neither energy dissipation nor sugar accumulation counteracted the negative effect of drought. Two hybrids with contrasting cold tolerance, selected from an early-sowing field trial, were subjected to chilling temperatures under controlled growth conditions to evaluate in depth their physiological and metabolic cold adaptation mechanisms. The hybrid which performed poorly under field conditions (ICSSH31) showed earlier metabolic changes (Chl a + b, xanthophyll cycle) and greater inhibition of enzymatic activity (Rubisco and PEPcase activity) than the cold-tolerant hybrid (Bulldozer). Important insights into the potential adaptability of sweet sorghum to temperate climates are given.

Abstract:

In 3D human movement analysis performed using stereophotogrammetric systems and skin markers, bone pose can only be estimated in an indirect fashion. During a movement, soft tissue deformations make the markers move with respect to the underlying bone, generating the soft tissue artefact (STA). STA has devastating effects on bone pose estimation, and its compensation remains an open question. The aim of this PhD thesis was to contribute to the solution of this crucial issue. Modelling STA using measurable, trial-specific variables is a fundamental prerequisite for its removal from marker trajectories. Two STA model architectures are proposed. Initially, a thigh marker-level artefact model is presented, in which STA is modelled as a linear combination of the joint angles involved in the movement. This model was calibrated using ex-vivo and in-vivo invasive STA measures. The considerable number of model parameters led to the definition of STA approximations: three definitions were proposed to represent STA as a series of modes, based on individual marker displacements, marker-cluster geometrical transformations (MCGT), and skin envelope shape variations. Modes were selected using two criteria: one based on modal energy, and another on a selection of modes chosen a priori. The MCGT makes it possible to select either the rigid or the non-rigid STA component, and it was empirically demonstrated that only the rigid component affects joint kinematics, regardless of the non-rigid amplitude. Therefore, a cluster-level model of the rigid STA component of thigh and shank was then defined. An acceptable trade-off between STA compensation effectiveness and number of parameters can be obtained, improving joint kinematics accuracy. The obtained results lead to two main potential applications: the proposed models can generate realistic STAs for simulation purposes, in order to compare different skeletal kinematics estimators; and, more importantly, by focusing only on the rigid STA component, the model attains a satisfactory STA reconstruction with fewer parameters, facilitating its incorporation in a pose estimator.
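The marker-level architecture, STA as a linear combination of joint angles, amounts to an ordinary least-squares calibration once invasive ground-truth displacements are available. A hedged numpy sketch of that calibration (a one-marker, one-coordinate toy with invented data, not the thesis implementation):

```python
import numpy as np

def calibrate_sta(joint_angles, displacements):
    """Fit marker displacement d ~ C @ angles + c0 by least squares.

    joint_angles: (n_samples, n_angles) array of joint angles;
    displacements: (n_samples,) measured artefact displacement.
    Returns the coefficient vector [C..., c0].
    """
    A = np.column_stack([joint_angles, np.ones(len(joint_angles))])
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return coeffs

def predict_sta(coeffs, angles):
    """Predicted artefact displacement for a new posture."""
    return float(np.dot(coeffs[:-1], angles) + coeffs[-1])
```

In the full model this fit is repeated per marker and per coordinate, which is exactly why the parameter count grows and motivates the modal approximations described above.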

Abstract:

This thesis investigates interactive scene reconstruction and understanding using RGB-D data only. Indeed, we believe that in the near future depth cameras will remain a cheap and low-power 3D sensing alternative, suitable for mobile devices too. Our contributions therefore build on top of state-of-the-art approaches to achieve advances in three main challenging scenarios, namely mobile mapping, large-scale surface reconstruction and semantic modeling. First, we will describe an effective approach to Simultaneous Localization And Mapping (SLAM) on platforms with limited resources, such as a tablet device. Unlike previous methods, dense reconstruction is achieved by reprojection of RGB-D frames, while local consistency is maintained by deploying relative bundle adjustment principles. We will show quantitative results comparing our technique to the state of the art, as well as detailed reconstructions of various environments ranging from rooms to small apartments. Then, we will address large-scale surface modeling from depth maps exploiting parallel GPU computing. We will develop a real-time camera tracking method based on the popular KinectFusion system, and an online surface alignment technique capable of counteracting drift errors and closing small loops. We will show very high quality meshes outperforming existing methods on publicly available datasets, as well as on data recorded with our RGB-D camera even in complete darkness. Finally, we will move to our Semantic Bundle Adjustment framework, which effectively combines object detection and SLAM in a unified system. Though the mathematical framework we will describe is not restricted to a particular sensing technology, in the experimental section we will refer, again, only to RGB-D sensing. We will discuss successful implementations of our algorithm, showing the benefit of joint object detection, camera tracking and environment mapping.
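A building block shared by camera tracking and surface alignment is the estimation of a rigid transform between two corresponded 3D point sets, classically solved in closed form with an SVD (the Kabsch/Umeyama solution, which ICP-style trackers iterate). A self-contained numpy sketch of that step, not taken from the thesis code:

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form best-fit rotation R and translation t such that
    R @ src[i] + t ~ dst[i] in the least-squares sense.

    src, dst: (n, 3) arrays of corresponded 3D points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t
```

In a KinectFusion-style pipeline the correspondences come from projective data association between the incoming depth map and the model surface, and this solve is repeated until convergence.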