884 results for Computational Geometry and Object Modelling


Abstract:

Ambient Intelligence (AmI) envisions a world where smart electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even when unaware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of many sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into account when designing and working with WSNs. To handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by a single sensor alone.

In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: multimodal surveillance and activity recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. These techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime.

Activity recognition is a fundamental building block of natural interfaces. A challenging objective is to design an activity recognition system able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs, as sketched below. We demonstrate the benefits of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended through a performance-power trade-off.

Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing activity recognition techniques.
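A minimal sketch of the meta-classifier fusion step described above, assuming each sensor node reports a (label, confidence) pair for the current gesture window; the function names and the confidence-weighted voting rule are illustrative, not the thesis's exact fusion algorithm:

```python
# Each node reports a (gesture_label, confidence) pair; the meta level
# fuses however many reports actually arrived, since nodes may be
# asleep, faulty, or out of range. Names are illustrative.
from collections import defaultdict

def fuse_node_outputs(reports):
    """reports: iterable of (gesture_label, confidence) tuples, one per
    responding node; the set of responders may change every window."""
    if not reports:
        return None  # no node answered: abstain
    score = defaultdict(float)
    for label, confidence in reports:
        score[label] += confidence  # confidence-weighted vote
    return max(score, key=score.get)

# Example: three nodes respond this window, a fourth is powered down.
print(fuse_node_outputs([("circle", 0.9), ("swipe", 0.4), ("circle", 0.6)]))
# -> "circle"
```

The point of the weighted vote is that the fused decision degrades gracefully as nodes drop out, which is the robustness property the architecture is claimed to provide.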

Abstract:

Wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. Contact models can be subdivided into two categories: global models and local (or differential) models. Currently, the main global approaches are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies: the contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through Lagrange multipliers; Hertz's and Kalker's theories then give the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies, but no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry), and the normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories give the shape of the contact patch and the tangential forces. Both multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. A complete description of the contact phenomena requires local (or differential) contact models: wheel and rail have to be considered elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy but requires very large computational costs and memory consumption; for this reason, referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field.

However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. This thesis describes some innovative wheel-rail contact models developed during the Ph.D. activity. Concerning the global models, two new models belonging to the semi-elastic approach are presented; they satisfy the following specifications:

1) the models have to be 3D and consider all six relative degrees of freedom between wheel and rail;
2) the models have to consider generic railway tracks and generic wheel and rail profiles;
3) the models have to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, they have to evaluate the number and position of the contact points and, for each point, the contact forces and torques;
4) the models have to be implementable directly online within multibody models without look-up tables;
5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications;
6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail).

The most innovative aspect of the new global contact models is the detection of the contact points: both models reduce the dimension of the algebraic problem by means of suitable analytical techniques. This reduction yields the high numerical efficiency that makes the online implementation of the new procedure possible and achieves performance comparable with that of commercial multibody software, while the analytical approach assures high accuracy and generality. Concerning the local (or differential) contact models, one new model is presented, satisfying the following specifications:

1) the model has to be 3D and consider all six relative degrees of freedom between wheel and rail;
2) the model has to consider generic railway tracks and generic wheel and rail profiles;
3) the model has to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, it has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements);
4) the model has to be implementable directly online within multibody models;
5) the model has to assure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling;
6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail).

In this case the most innovative aspects of the new local contact model are the contact modeling itself (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.

Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. The contact models were then inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are readily available in the literature. The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models use C S-functions, a Matlab architecture that efficiently connects the Matlab/Simulink and C/C++ environments. A 3D multibody model of the same vehicle, this time equipped with a standard contact model based on the semi-elastic approach, has also been implemented in Simpack Rail, a widely tested and validated commercial multibody software package for railway vehicles. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks to evaluate the performance of the whole model. The comparison between the results obtained with the Matlab/Simulink model and those obtained with the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. To conclude this brief introduction, we would like to thank Trenitalia and Regione Toscana for the support provided throughout the Ph.D. activity, as well as INTEC GmbH, the company that develops Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
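As context for the semi-elastic normal-force step mentioned above, a hedged sketch of a Hertzian point-contact law, in which the normal force grows as the 3/2 power of the indentation; the stiffness constant below is illustrative, not the thesis's calibrated value:

```python
# Once a contact point and its indentation (penetration depth) are
# known, a Hertzian point-contact law gives the normal force.
def hertz_normal_force(indentation_m, k_hertz=1.0e11):
    """F = k * delta**(3/2); k lumps the effective curvature of the
    wheel/rail profiles and the combined elastic modulus (illustrative)."""
    if indentation_m <= 0.0:
        return 0.0  # bodies not in contact
    return k_hertz * indentation_m ** 1.5

# Example: 0.1 mm indentation
print(f"{hertz_normal_force(1.0e-4):.3e} N")
```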

Abstract:

Aerosol particles and water vapour are two important constituents of the atmosphere. Their interaction, i.e. the condensation of water vapour on particles, brings about the formation of cloud, fog and raindrops, driving the water cycle on Earth and influencing climate. Understanding the roles of water vapour and aerosol particles in this interaction has become an essential part of understanding the atmosphere. In this work, heterogeneous nucleation on pre-existing aerosol particles by the condensation of water vapour in the flow of a capillary nozzle was investigated, through theoretical and numerical modelling as well as experiments. Based on the results of the modelling, the idea of a new nozzle condensation nucleus counter (Nozzle-CNC), which uses the capillary nozzle to create an expanding water-saturated air flow, was put forward, and experiments were carried out with this Nozzle-CNC under different conditions.

Firstly, the air stream in the long capillary nozzle, with an inner diameter of 1.0 mm, was modelled as a steady, compressible and heat-conducting turbulent flow with the CFX-FLOW3D computational program. An adiabatic and isentropic cooling in the nozzle was found: a supersaturation can be created in the nozzle if the inlet flow is water-saturated, and its value depends principally on the flow velocity or flow rate through the nozzle.

Secondly, a model of particle condensational growth in an air stream was developed: an extended Mason diffusion growth equation, with a size correction for particles beyond the continuum regime and a correction for finite particle Reynolds number in an accelerating flow. The modelling shows rapid condensational growth of aerosol particles in the nozzle stream, especially for fine particles (a simplified growth sketch follows below). On the one hand, this growth may induce evident 'over-sizing' and 'over-numbering' effects in aerosol measurements, since nozzle designs are widely employed to produce accelerated, focused aerosol beams in instruments like the optical particle counter (OPC) and the aerodynamic particle sizer (APS). On the other hand, it can be exploited to construct the Nozzle-CNC.

Thirdly, based on the optimisation of the theoretical and numerical results, the new Nozzle-CNC was built, and experiments were carried out under various conditions of flow rate, ambient temperature and fraction of aerosol in the total flow. An interesting exponential relation was found between the saturation in the nozzle and the number concentration of atmospheric nuclei, including hygroscopic nuclei (HN), cloud condensation nuclei (CCN) and traditionally measured atmospheric condensation nuclei (CN). This relation differs from the relation for the number concentration of CCN obtained by other researchers. The minimum detectable size of this Nozzle-CNC is 0.04 µm. Although further improvements are still needed, this Nozzle-CNC has several advantages over other CNCs: no condensation delay, since particles larger than the critical size grow simultaneously; low diffusion losses of particles; little water condensation on the inner wall of the instrument; and an adjustable saturation and therefore a wide counting region, with no need for the calibration required when condensing substances other than water are used.
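A minimal sketch of condensational droplet growth, using the classical Mason diffusion-growth form r·dr/dt = (S−1)/(Fk+Fd) without the thesis's size and Reynolds-number corrections; the value of Fk+Fd (the lumped heat-conduction and vapour-diffusion terms) is illustrative:

```python
# Euler integration of the classical Mason growth equation.
def grow_droplet(r0_m, supersaturation, dt_s, steps, fk_plus_fd=1.0e10):
    r = r0_m
    for _ in range(steps):
        drdt = (supersaturation - 1.0) / (r * fk_plus_fd)  # m/s
        r += drdt * dt_s
    return r

# A 0.04 um particle in S = 1.5 air, followed for 1 ms:
print(f"{grow_droplet(4.0e-8, 1.5, 1.0e-6, 1000):.3e} m")
```

Even this crude model shows the key qualitative point: growth is fastest for the smallest particles, since dr/dt scales as 1/r.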

Abstract:

In this thesis we present some combinatorial optimization problems and suggest models and algorithms for their effective solution. For each problem we give its description, followed by a short literature review, provide methods to solve it and, finally, present computational results and comparisons with previous works to show the effectiveness of the proposed approaches. The problems considered are: the Generalized Traveling Salesman Problem (GTSP), the Bin Packing Problem with Conflicts (BPPC) and the Fair Layout Problem (FLOP).

Abstract:

The Northern Apennines (NA) chain is the expression of the active plate margin between Europe and Adria. Given the low convergence rates and the moderate seismic activity, ambiguities remain in defining a seismotectonic framework, and many different scenarios have been proposed for the evolution of the mountain front. Unlike older models, which indicate the mountain front as an active thrust at the surface, a recently proposed scenario describes it as the frontal limb of a long-wavelength fold (> 150 km) formed by a thrust fault tipped at around 17 km depth and considered the active subduction boundary. East of Bologna this frontal limb is remarkably straight and its surface is riddled with small but pervasive high-angle normal faults. West of Bologna, however, some recesses are visible along the strike of the mountain front: these perturbations seem due to the presence of shorter-wavelength (15 to 25 km along strike) structures showing both NE and NW vergence. Pleistocene activity of these structures had already been suggested, but no quantitative reconstructions are available in the literature. This research investigates the tectonic geomorphology of the NA mountain front with the specific aim of quantifying active deformation and inferring possible deep causes of both the short- and the long-wavelength structures. The study documents the presence of a network of active extensional faults in the foothills south and east of Bologna. For these structures the strain rate has been measured, yielding a constant throw-to-length relationship, and the slip rates have been compared with measured rates of erosion. Fluvial geomorphology and quantitative analysis of the topography document in detail the active tectonics of two growing domal structures (the Castelvetro-Vignola foothills and the Ghiardo plateau) embedded in the mountain front west of Bologna. Here, tilting and river incision rates (interpreted as long-term uplift rates) have been measured, respectively, at the mountain front and in the Enza and Panaro valleys, using a well-defined stratigraphy of Pleistocene to Holocene river terraces and alluvial fan deposits as growth strata, together with seismic reflection profiles. The geometry and uplift rates of the anticlines constrain a simple trishear fault-propagation folding model that inverts for blind thrust ramp depth, dip and slip. Topographic swath profiles and the steepness index of river longitudinal profiles that traverse the anticlines are consistent with the stratigraphy, structures, aquifer geometry and seismic reflection profiles. Available focal mechanisms of earthquakes with magnitudes between Mw 4.1 and 5.4, obtained from a dataset of the instrumental seismicity of the last 30 years, show a clear vertical separation at around 15 km between shallow extensional and deeper compressional hypocenters along the mountain front and adjacent foothills. In summary, the studied anticlines appear to grow at rates slower than the growth rate of the longer-wavelength structure that defines the mountain front of the NA. The domal structures show evidence of NW-verging deformation and reactivation of older (late Neogene) thrusts. The reconstructed river incision rates, together with rates from several other rivers along a 250 km stretch of the NA mountain front recently published in the literature, all indicate a general increase from the Middle to the Late Pleistocene. This suggests a focusing of deformation along a deep structure, as confirmed by the deep compressional seismicity. The maximum rate is, however, not constant along the mountain front, varying from 0.2 mm/yr in the west to more than 2.2 mm/yr in the eastern sector, and suggesting a similar eastward-increasing trend of the Apenninic subduction.
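A hedged sketch of the channel steepness index used in the river-profile analysis above, assuming the standard slope-area relation S = ksn · A^(−θ) with a fixed reference concavity; the numbers are illustrative, not the thesis's measurements:

```python
# Normalized channel steepness from local slope and drainage area,
# evaluated per channel segment with a reference concavity theta_ref.
import numpy as np

def normalized_steepness(slope, area_m2, theta_ref=0.45):
    """ksn = S * A**theta_ref, from S = ksn * A**(-theta_ref)."""
    return slope * np.asarray(area_m2) ** theta_ref

slopes = np.array([0.02, 0.05, 0.08])
areas = np.array([1.0e7, 5.0e6, 1.0e6])   # m^2
print(normalized_steepness(slopes, areas))
```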

Abstract:

Persistent topology is an innovative way of matching topology and geometry, and it proves to be an effective mathematical tool in shape analysis. To express its full potential for applications, it has to interface with the typical environment of computer science: it must be possible to deal with a finite sampling of the object of interest and with combinatorial representations of it. Following that idea, the main result states that it is possible to construct a relation between the persistent Betti numbers (PBNs, also called the rank invariant) of a compact Riemannian submanifold X of R^m and those of an approximation U of X, where U is generated by a ball covering centered at the sampling points. A further result relates X to a finite simplicial complex S generated from the sampling points by a particular construction. More precisely, strict inequalities hold only in "blind strips", i.e. narrow areas around the discontinuity sets of the PBNs of U (or S). Outside the blind strips, the values of the PBNs of the original object, of its ball covering and of the simplicial complex coincide.
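As an illustration of the objects involved, a short computation of persistent Betti numbers for a simplicial complex built on a circle sampling, using the GUDHI library (an assumption for the example; the thesis itself is theoretical). The Vietoris-Rips complex here plays the role of the complex S built on the sampling points:

```python
# Persistent Betti numbers of a Rips complex on 30 points sampled
# from the unit circle; GUDHI is assumed to be installed.
import math
import gudhi

points = [(math.cos(2 * math.pi * k / 30), math.sin(2 * math.pi * k / 30))
          for k in range(30)]
rips = gudhi.RipsComplex(points=points, max_edge_length=1.0)
st = rips.create_simplex_tree(max_dimension=2)
st.persistence()  # computes the persistence diagram first
# PBNs from scale 0.3 to scale 0.5:
print(st.persistent_betti_numbers(0.3, 0.5))  # expect [1, 1]: a circle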

Abstract:

In this thesis I treat various biophysical questions arising in the context of complexed, "protein-packed" DNA and of DNA in confined geometries (as in viruses or toroidal DNA condensates). Using diverse theoretical methods I consider the statistical mechanics as well as the dynamics of DNA under these conditions. In the first part of the thesis (chapter 2) I derive for the first time the single-molecule "equation of state", i.e. the force-extension relation, of a looped DNA (Eq. 2.94) using the path integral formalism. Generalizing these results, I show that the presence of elastic substructures like loops, or of deflections caused by anchoring boundary conditions (e.g. at the AFM tip or the mica substrate), gives rise to a significant renormalization of the apparent persistence length as extracted from single-molecule experiments (Eqs. 2.39 and 2.98). As I show, the experimentally observed reduction of the apparent persistence length by a factor of 10 or more is naturally explained by this theory. In chapter 3 I consider theoretically the thermal motion of nucleosomes along a DNA template. After an extensive analysis of the available experimental data and theoretical modelling of two possible mechanisms, I conclude that the "corkscrew-motion" mechanism most consistently explains this biologically important process. In chapter 4 I demonstrate that DNA spools (architectures in which DNA winds circumferentially around a cylindrical surface, or onto itself) show a remarkable "kinetic inertness" that protects them from tension-induced disruption on experimentally and biologically relevant timescales (cf. Fig. 4.1 and Eq. 4.18). The underlying model establishes a connection between the seemingly unrelated and previously unexplained force peaks in single-molecule nucleosome and DNA-toroid stretching experiments. Finally, in chapter 5 I show that toroidally confined DNA (found in viruses, DNA condensates and sperm chromatin) undergoes a transition to a twisted, highly entangled state provided that the aspect ratio of the underlying torus crosses a critical value (cf. Eq. 5.6 and the phase diagram in Fig. 5.4). The presented mechanism could rationalize several experimental mysteries, ranging from the entangled and supercoiled toroids released from virus capsids to the unexpectedly short cholesteric pitch in (toroidally wound) sperm chromatin. I propose that the "topological encapsulation" resulting from our model may have practical implications for gene-therapeutic DNA delivery.
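For context on the force-extension relations and apparent persistence lengths discussed above, a sketch of the standard Marko-Siggia worm-like-chain interpolation formula against which single-molecule data are usually fitted (this is the textbook baseline, not the thesis's looped-DNA result of Eq. 2.94):

```python
# Marko-Siggia WLC interpolation: the force needed to stretch a chain
# of contour length L and persistence length P to extension x.
KBT = 4.11e-21  # J, thermal energy at room temperature

def wlc_force(x, L, P):
    """F = (kT/P) * [1/(4(1 - x/L)^2) - 1/4 + x/L], in newtons."""
    rel = x / L
    return (KBT / P) * (0.25 / (1.0 - rel) ** 2 - 0.25 + rel)

# DNA with P = 50 nm, 1 um contour length, stretched to 80%:
print(f"{wlc_force(0.8e-6, 1.0e-6, 50e-9):.2e} N")  # ~0.6 pN
```

Fitting this formula to data from a molecule that actually contains loops or anchoring deflections returns a smaller, "apparent" P, which is the renormalization effect the thesis quantifies.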

Abstract:

This thesis is based on the integration of traditional and innovative approaches aimed at improving the identification and characterization of seismogenic normal faults, focusing mainly on slip-rate estimates as a measure of fault activity. The causative fault of the Mw 6.3, April 6, 2009 L'Aquila earthquake, the Paganica - San Demetrio fault system (PSDFS), was used as a test site. We developed a multidisciplinary, scale-based strategy consisting of paleoseismological investigations, detailed geomorphological and geological field studies, shallow geophysical imaging and an innovative application of physical-property measurements. We produced a detailed geomorphological and geological map of the PSDFS, defining its tectonic style, arrangement, kinematics, extent, geometry and internal complexities. The PSDFS is a 19 km-long tectonic structure, characterized by a complex structural setting and arranged in two main sectors: the Paganica sector to the NW, characterized by a narrow deformation zone, and the San Demetrio sector to the SE, where the strain is accommodated by several tectonic structures exhuming and dissecting a wide Quaternary basin, suggesting strain migration through time. The integration of all the fault displacement data and age constraints (radiocarbon dating, optically stimulated luminescence (OSL) and tephrochronology) allowed us to calculate an average Quaternary slip rate for the PSDFS of 0.27 - 0.48 mm/yr. On the basis of its length (ca. 20 km) and slip per event (up to 0.8 m), we also estimated a maximum expected magnitude of 6.3-6.8 for this fault. All these topics have significant implications for the surface-faulting hazard in the area and may also contribute to the understanding of the seismic behavior of the PSDFS and of the local seismic hazard.
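A minimal sketch of how a slip-rate range of this kind is obtained, assuming a cumulative throw measured on a dated marker with uncertainties on both quantities; all numbers are illustrative, not the thesis's measurements:

```python
# Slip rate = displaced-marker throw / marker age, with the two
# uncertainties propagated to a min-max bracket.
def slip_rate_range(throw_m, throw_err_m, age_yr, age_err_yr):
    slow = (throw_m - throw_err_m) / (age_yr + age_err_yr)
    fast = (throw_m + throw_err_m) / (age_yr - age_err_yr)
    return slow * 1000, fast * 1000  # mm/yr

lo, hi = slip_rate_range(throw_m=7.5, throw_err_m=2.0,
                         age_yr=20000, age_err_yr=2000)
print(f"{lo:.2f} - {hi:.2f} mm/yr")
```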

Abstract:

Non-equilibrium statistical mechanics is a broad subject. Roughly speaking, it deals with systems that have not yet relaxed to an equilibrium state, with systems in a steady non-equilibrium state, and with more general situations. Such systems are characterized by external forcing and internal fluxes, resulting in a net production of entropy that quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships; we discuss a duality between external and internal observables that reverses the roles of the system and of the environment; and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects of linear-response determinants, which are related to spanning-tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We then specialize the formalism to continuous-time Markov chains: we give a physical interpretation of observables in terms of locally detailed balanced rates, we prove many variants of the fluctuation theorem, and we show that a well-known expression for the entropy production, due to Schnakenberg, descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
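A small worked example of the Schnakenberg entropy production mentioned above, for a continuous-time Markov chain: σ = ½ Σᵢⱼ (wᵢⱼpⱼ − wⱼᵢpᵢ) ln[(wᵢⱼpⱼ)/(wⱼᵢpᵢ)]. The rates below are illustrative and chosen to break detailed balance:

```python
# Entropy production of a driven 3-state cycle in its steady state.
import numpy as np

def entropy_production(W, p):
    """W[i, j] = transition rate j -> i; p = probability vector.
    Returns sigma in units of k_B per unit time."""
    sigma = 0.0
    n = len(p)
    for i in range(n):
        for j in range(n):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                flux = W[i, j] * p[j] - W[j, i] * p[i]
                sigma += 0.5 * flux * np.log((W[i, j] * p[j])
                                             / (W[j, i] * p[i]))
    return sigma

W = np.array([[0.0, 1.0, 2.0],
              [2.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
p = np.full(3, 1.0 / 3.0)   # uniform is stationary here by symmetry
print(entropy_production(W, p))  # ~0.693 > 0: non-equilibrium steady state
```

At detailed balance every flux term vanishes and σ = 0, which is the equilibrium limit.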

Abstract:

We present a complete, exact and efficient algorithm for computing the adjacency graph of an arrangement of quadrics (algebraic surfaces of degree 2). This is an important step towards computing the full 3D arrangement. We build on an existing implementation that computes an exact parametrization of the intersection curve of two quadrics. This makes it possible to determine the exact parameter values of the intersection points, to sort them along the curves, and to compute the adjacency graph. We call our implementation complete because it also handles all degenerate cases, such as singular or tangential intersection points. It is exact because it always computes the mathematically correct result. Finally, we call it efficient because it compares well with the only previously implemented approach. Our approach was implemented within the EXACUS project, whose central goal is to develop a prototype of a reliable and powerful CAD geometry kernel. Although we describe the design of our library as prototypical, we place the greatest value on completeness, exactness, efficiency, documentation and reusability. Beyond the immediate contribution to EXACUS, the approach presented here, through its particular requirements, also had a substantial influence on fundamental parts of EXACUS. In particular, this work contributed to the generic support of number types and to the use of modular methods within EXACUS. In the course of the ongoing integration of EXACUS into CGAL, these parts have already been successfully developed into mature CGAL packages.
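A toy 2D analogue of this exact-intersection machinery (the thesis works with degree-2 surfaces in 3D, with exact algebraic-number arithmetic in C++): SymPy computes the intersection points of two conics exactly, as radical expressions rather than floating-point approximations, and sorting such points along each curve is the analogue of ordering intersections along a quadric intersection curve:

```python
# Exact intersection of two conics with SymPy.
from sympy import symbols, solve, Rational

x, y = symbols("x y", real=True)
circle = x**2 + y**2 - 1                        # unit circle
ellipse = x**2 / 4 + y**2 * Rational(9, 4) - 1  # axis-aligned ellipse

# Four exact intersection points, e.g. (sqrt(10)/4, sqrt(6)/4):
for point in solve([circle, ellipse], [x, y]):
    print(point)
```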

Abstract:

Visual perception relies on a two-dimensional projection of the viewed scene onto the retinas of both eyes. Visual depth therefore has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study an extended Bayesian model is proposed that takes into account both cue reliability and cue consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross-section, constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture and motion gradients, while the degree of consistency among these cues was systematically varied. The extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same task, the integration of stereoscopic disparity, shading and texture gradients was examined in Experiment 4. Less reliable cues were down-weighted in the combined percept, and a specific influence of cue consistency was revealed: shading and disparity seemed to be processed interactively, while other cue combinations were well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible, depending on single-cue properties as well as on interrelations among cues. The extension of the traditional cue-combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
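A sketch of the traditional reliability-based combination rule that the extended model generalizes: each cue's depth estimate is weighted by its reliability (inverse variance). Cue interactions, the thesis's extension, are deliberately omitted here; the numbers are illustrative:

```python
# Reliability-weighted linear fusion of independent depth cues.
def combine_cues(estimates, variances):
    """Weight each cue by 1/variance and normalize the weights."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, estimates)) / total

# Shading, texture and motion suggest slightly different depths:
depth = combine_cues(estimates=[10.0, 12.0, 11.0],
                     variances=[4.0, 1.0, 2.0])
print(f"{depth:.2f}")  # ~11.43, dominated by the most reliable cue
```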

Abstract:

This thesis is devoted to the study of the properties of high-redshift galaxies in the epoch 1 < z < 3, when a substantial fraction of galaxy mass was assembled and the evolution of the star-formation rate density peaked. Following a multi-perspective approach and using the most recent, high-quality data available (spectra, photometry and imaging), the morphologies and star-formation properties of high-redshift galaxies were investigated. Through an accurate morphological analysis, the build-up of the Hubble sequence was placed at around z ~ 2.5. High-redshift galaxies appear, in general, much more irregular and asymmetric than local ones. Moreover, the occurrence of morphological k-correction is less pronounced than in the local Universe. Different star-formation rate indicators were also studied. The comparison of ultraviolet- and optical-based estimates with values derived from the infrared luminosity showed that the traditional way of addressing dust obscuration is problematic at high redshift, and new models of dust geometry and composition are required. Finally, by means of stacking techniques applied to rest-frame ultraviolet spectra of star-forming galaxies at z ~ 2, the warm phase of galactic-scale outflows was studied, finding evidence of escaping gas at velocities of ~ 100 km/s. By studying the correlation of interstellar absorption-line equivalent widths with galaxy physical properties, the intensity of the outflow-related spectral features was shown to depend strongly on a combination of the velocity dispersion of the gas and its geometry.
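A hedged sketch of the stacking step described above, assuming spectra are first interpolated onto a common rest-frame wavelength grid, crudely normalized, and median-combined so that faint interstellar features emerge above the noise of any single object; grid limits and counts are illustrative:

```python
# Median stacking of rest-frame UV spectra on a common grid.
import numpy as np

def stack_spectra(wavelength_grids, fluxes, common_grid):
    resampled = []
    for wl, fx in zip(wavelength_grids, fluxes):
        fx_common = np.interp(common_grid, wl, fx)
        fx_common /= np.median(fx_common)  # crude continuum normalization
        resampled.append(fx_common)
    return np.median(resampled, axis=0)    # median suppresses outliers

common = np.linspace(1200.0, 1700.0, 500)          # rest-frame Angstrom
grids = [common + np.random.uniform(-1, 1) for _ in range(50)]
spectra = [1.0 + 0.1 * np.random.randn(500) for _ in range(50)]
print(stack_spectra(grids, spectra, common)[:5])
```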

Abstract:

In this work, two types of non-covalently linked network structures built from phosphonic-acid-containing molecules are presented. On the one hand, these molecules are intended for use as proton conductors in fuel cells, motivated by the possibility of cooperative proton transport in hydrogen-bonded networks. On the other hand, the phosphonic-acid-containing molecules are to be used, together with metal cations, to prepare ionic networks; in this case the phosphonated molecules act as linkers in porous organic-inorganic hybrid materials suitable, for example, for gas storage.

A fuel cell provides energy with high efficiency and low environmental impact. The heart of the fuel cell is the electrolyte membrane, also known as the separator or proton exchange membrane (PEM). The key to the further development of PEM fuel cells is believed to lie in electrolytes that transport protons exclusively and efficiently and that are, in addition, chemically (oxidation-resistant) and mechanically stable. Mechanical stability particularly concerns fuel-cell operation at high temperatures and low relative humidity. This work presents a novel approach to achieving high solid-state proton transport, based on small molecules that self-assemble into a continuous proton-conducting phase. To date, hexakis(p-phosphonatophenyl)benzene is the first example of a crystalline proton conductor that shows high and constant performance in the solid state. Modifying hexakis(p-phosphonatophenyl)benzene, either by changing from para- to meta-substitution or by introducing alkyl chains, leads to compounds of lower crystallinity and lower proton conductivity.

In the second part of the work, 1,3,5-tris(p-phosphonatophenyl)benzene was used as a linker in the synthesis of open phosphonate networks. Highly stable solids form owing to the ionic interaction between the positively charged metal cations and the negatively charged phosphonic acid groups. One of the most important results of this work is that 1,3,5-tris(p-phosphonatophenyl)benzene can be used as a linker for building porous hybrid materials: for the first time, a triply phosphonated organic molecule was used to construct microporous open phosphonate networks. It was also shown that the porosity is related to the growth mechanism of these materials. An ionic network based on phosphonated molecules can only be both microporous and crystalline if linker and connector have the same geometry and functionality.

Abstract:

Flow features inside centrifugal compressor stages are very complicated to simulate with numerical tools because of the highly complex geometry and the varying gas conditions across the machine. For this reason, a big effort is currently being made to increase the fidelity of the numerical models used during the design and validation phases. Computational Fluid Dynamics (CFD) plays an increasing role in the assessment of the performance prediction of centrifugal compressor stages. Historically, CFD was considered reliable for performance prediction at a qualitative level, whereas tests were necessary to predict compressor performance quantitatively. In fact, "standard" CFD, with only the flow path and blades included in the computational domain, is known to be weak at capturing efficiency levels and operating range accurately, owing to the underestimation of losses and the lack of secondary-flow modeling. This research project aims to close the accuracy gap between "standard" CFD and test data by including a high-fidelity reproduction of the gas domain and by using advanced numerical models and tools introduced into the author's OEM in-house CFD code. In other words, this thesis describes a methodology by which virtual tests can be conducted on single-stage and multistage centrifugal compressors in a fashion similar to a typical rig test, allowing end users to operate machines with a confidence level not achievable before. Furthermore, the new "high fidelity" approach has made it possible to understand flow phenomena not fully captured before, increasing aerodynamicists' capability and confidence in designing high-efficiency, highly reliable centrifugal compressor stages.

Abstract:

In many application domains data can be naturally represented as graphs. When the application of analytical solutions to a given problem is unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form; recently, some of them have been extended to deal directly with structured data. Among these techniques, kernel methods have shown promising results both in computational complexity and in predictive performance. Kernel methods avoid an explicit mapping into vectorial form by relying on kernel functions, which, informally, are functions calculating a similarity measure between two entities (see the sketch below). However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good trade-off between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. This thesis makes three main contributions. The first is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs); we analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification point of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way of managing memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting the secondary structure is thought to carry relevant information, but existing methods that consider the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods in this domain, obtaining state-of-the-art results.
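A minimal sketch of a graph kernel, using the simple node-label histogram kernel k(G1, G2) = ⟨φ(G1), φ(G2)⟩ where φ counts label occurrences. It is far less expressive than the thesis's DAG-based kernels, but it shows the common pattern: an implicit feature map evaluated through a dot product, never materializing a vector per graph explicitly:

```python
# Node-label histogram kernel between two labeled graphs.
from collections import Counter

def label_histogram_kernel(labels_g1, labels_g2):
    """labels_gX: list of node labels of a graph."""
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in h1)  # sparse dot product

g1 = ["C", "C", "O", "H", "H"]
g2 = ["C", "O", "O", "H"]
print(label_histogram_kernel(g1, g2))  # 2*1 + 1*2 + 2*1 = 6
```

More expressive graph kernels (including DAG-based ones) follow the same recipe while counting richer substructures than single labels, which is exactly where the complexity-expressiveness trade-off mentioned above arises.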