973 results for Multi-view geometry
Abstract:
In recent years, information technologies have been at the center of exponential development. Among the countless innovations introduced, the agent-oriented programming paradigm has gained more and more ground; it enables the construction of complex software systems, which play a role of fundamental importance in modern computing. These systems, termed autonomous, exhibit characteristics that are attractive for dynamic scenarios: they must be robust and resilient, able to adapt to their environment and thus react to changes occurring within it, behaving accordingly; this reflects the pro-activity of the entity under consideration. This thesis explains these kinds of systems, introduces their characteristics, and shows their potential. These characteristics make it possible to delegate responsibility to the individual entities, making the system self-organized, with better scalability and modularity, and thus reducing heavy computational demands. The document is organized as follows: the first chapters introduce the world of autonomous systems, starting from the definitions of autonomy and of software agents and concluding with multi-agent systems, in order to give the reader a suitable and thorough understanding. The subsequent chapters cover the design phases of the entities under examination, their forms of standardization, and the models they can adopt, the best known of which is the BDI model. Two different methodologies for agent-oriented software engineering follow. The thesis closes with a presentation of the state of the art of the known development environments, containing a thorough introduction to each of them and a view of their contribution to commercial applications in industry, and ends with a chapter of conclusions and reflections on possible future directions.
Abstract:
One of the most recent topics in telecommunications is the IoT. The term denotes a scenario in which not only people, with their personal devices, but also the objects surrounding them are connected to the network in order to exchange information of various kinds. The ever-growing number of connected devices will lead to higher demands in terms of channel capacity and transmission speed. The technological answer to these needs will come with the advent of 5G, whose key technologies will be massive MIMO, small cells, and the use of millimeter waves. Over time, the growth in sales of smartphones and mobile devices able to exploit localization to obtain services has caused research in this field to increase exponentially. Position information is in fact used in many domains, ranging from traditional navigation toward a desired destination to geomarketing, and from emergency-call services to indoor logistics for industry. Given the importance of the positioning process, the goal of this thesis is to estimate the position and trajectory of a user moving in an indoor environment by exploiting the communication infrastructure that will emerge with the advent of 5G, thus cutting costs. To this end, an algorithm based on extended Kalman filters (EKF) was implemented, in which the analyzed system has an antenna array at the receiver, while at the transmitter two cases were compared: a single antenna and an array. Studying both situations highlights the advantages obtained from multi-antenna systems. Other key factors affecting accuracy were also analyzed, namely system geometry, receiver placement, and operating frequency.
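The EKF-based tracking mentioned above can be illustrated with a deliberately minimal sketch: a 1-D constant-velocity state observed through a nonlinear range measurement to a single anchor. This is not the thesis's algorithm (which involves antenna arrays and 5G infrastructure); the motion model, anchor position `xa`, offset `d`, and noise parameters `q` and `r` are all hypothetical.

```python
import math

# Small explicit 2x2 matrix helpers, kept readable rather than general.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def ekf_track(measurements, dt=1.0, xa=-10.0, d=5.0, q=0.01, r=0.25):
    """Track 1-D position from noisy ranges z = sqrt((x-xa)^2 + d^2) to one anchor."""
    x, v = 1.0, 0.0                       # initial state guess
    P = [[10.0, 0.0], [0.0, 10.0]]        # initial state covariance
    F = [[1.0, dt], [0.0, 1.0]]           # constant-velocity transition matrix
    estimates = []
    for z in measurements:
        # --- predict ---
        x, v = x + v * dt, v
        P = mat_add(mat_mul(mat_mul(F, P), transpose(F)), [[q, 0.0], [0.0, q]])
        # --- update: linearize the range measurement around the prediction ---
        h = math.sqrt((x - xa) ** 2 + d * d)   # predicted range
        H = [(x - xa) / h, 0.0]                # measurement Jacobian (1x2)
        S = H[0] * P[0][0] * H[0] + r          # innovation variance (H[1] = 0)
        K = [P[0][0] * H[0] / S, P[1][0] * H[0] / S]  # Kalman gain (2x1)
        y = z - h                              # innovation
        x, v = x + K[0] * y, v + K[1] * y
        KH = [[K[0] * H[0], K[0] * H[1]], [K[1] * H[0], K[1] * H[1]]]
        P = mat_sub(P, mat_mul(KH, P))         # P = (I - K H) P
        estimates.append(x)
    return estimates
```

A real positioning system would extend the state to 2-D or 3-D and fuse array measurements (e.g., angles of arrival), but the predict/linearize/update cycle is the same.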
Abstract:
Design, implementation, and performance and robustness analysis of a high-availability cluster system based on MariaDB Galera Cluster, paired with a MaxScale proxy layer. Study and implementation of a migration procedure from a MySQL DBMS to MariaDB Galera Cluster.
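As a purely illustrative aside (a minimal sketch with placeholder paths, names, and addresses; not the configuration used in the thesis), a Galera node in MariaDB is typically enabled through a handful of settings in the `[mysqld]` section of `my.cnf`:

```ini
[mysqld]
# Galera requires row-based replication on InnoDB tables
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2

wsrep_on              = ON
wsrep_provider        = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name    = "example_cluster"
wsrep_cluster_address = "gcomm://10.0.0.1,10.0.0.2,10.0.0.3"
wsrep_node_address    = "10.0.0.1"
wsrep_sst_method      = mariabackup
```

The first node of a new cluster is typically bootstrapped with `galera_new_cluster`; the remaining nodes then join via the `gcomm://` address list.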
Abstract:
This thesis addresses the multi-platform development of mobile applications. It gives an overview of the possible approaches and of the corresponding development frameworks. Having identified the approach deemed most interesting, a case study is carried out in order to validate the technology.
Abstract:
In order to explore the genetic diversity within Echinococcus multilocularis (E. multilocularis), the cestode responsible for alveolar echinococcosis (AE) in humans, a microsatellite, composed of (CA) and (GA) repeats and designated EmsB, was isolated and characterized in view of its nature and potential field application. PCR-amplification with specific primers exhibited a high degree of size polymorphism between E. multilocularis and Echinococcus granulosus sheep (G1) and camel (G6) strains. Fluorescent-PCR was subsequently performed on a panel of E. multilocularis isolates to assess the intra-species polymorphism level. EmsB provided a multi-peak profile, characterized by tandemly repeated microsatellite sequences in the E. multilocularis genome. This "repetition of repeats" feature gave EmsB a high discriminatory power: eight clusters, supported by bootstrap p-values larger than 95%, could be defined among the tested E. multilocularis samples. We were able not only to differentiate the Alaskan from the European samples, but also to detect different European isolate clusters. In total, 25 genotypes were defined within 37 E. multilocularis samples. Despite its complexity, this tandemly repeated multi-locus microsatellite possesses the three important features of a molecular marker, i.e. sensitivity, repetitiveness and discriminatory power. It will permit assessing the genetic polymorphism of E. multilocularis and investigating its spatial distribution in detail.
Abstract:
Internal combustion engines are, and will continue to be, a primary mode of power generation for ground transportation. Challenges exist in meeting fuel consumption regulations and emission standards while upholding performance, as fuel prices rise, and resource depletion and environmental impacts are of increasing concern. Diesel engines are advantageous due to their inherent efficiency advantage over spark ignition engines; however, their NOx and soot emissions can be difficult to control and reduce due to an inherent tradeoff. Diesel combustion is spray and mixing controlled providing an intrinsic link between spray and emissions, motivating detailed, fundamental studies on spray, vaporization, mixing, and combustion characteristics under engine relevant conditions. An optical combustion vessel facility has been developed at Michigan Technological University for these studies, with detailed tests and analysis being conducted. In this combustion vessel facility a preburn procedure for thermodynamic state generation is used, and validated using chemical kinetics modeling both for the MTU vessel, and institutions comprising the Engine Combustion Network international collaborative research initiative. It is shown that minor species produced are representative of modern diesel engines running exhaust gas recirculation and do not impact the autoignition of n-heptane. Diesel spray testing of a high-pressure (2000 bar) multi-hole injector is undertaken including non-vaporizing, vaporizing, and combusting tests, with sprays characterized using Mie back scatter imaging diagnostics. Liquid phase spray parameter trends agree with literature. Fluctuations in liquid length about a quasi-steady value are quantified, along with plume to plume variations. Hypotheses are developed for their causes including fuel pressure fluctuations, nozzle cavitation, internal injector flow and geometry, chamber temperature gradients, and turbulence. 
These are explored using a mixing limited vaporization model with an equation of state approach for thermophysical properties. This model is also applied to single and multi-component surrogates. Results include the development of the combustion research facility and validated thermodynamic state generation procedure. The developed equation of state approach provides application for improving surrogate fuels, both single and multi-component, in terms of diesel spray liquid length, with knowledge of only critical fuel properties. Experimental studies are coupled with modeling incorporating improved thermodynamic non-ideal gas and fuel
Abstract:
Water-saturated debris flows are among the most destructive mass movements. Their complex nature presents a challenge for quantitative description and modeling. In order to improve understanding of the dynamics of these flows, it is important to seek a simplified dynamic system underlying their behavior. Models currently in use to describe the motion of debris flows employ depth-averaged equations of motion, typically assuming negligible effects from vertical acceleration. However, in many cases debris flows experience significant vertical acceleration as they move across irregular surfaces, and it has been proposed that friction associated with vertical forces and liquefaction merit inclusion in any comprehensive mechanical model. The intent of this work is to determine the effect of vertical acceleration through a series of laboratory experiments designed to simulate debris flows, testing a recent model for debris flows experimentally. In the experiments, a mass of water-saturated sediment is released suddenly from a holding container, and parameters including rate of collapse, pore-fluid pressure, and bed load are monitored. Experiments are simplified to axial geometry so that variables act solely in the vertical dimension. Steady state equations to infer motion of the moving sediment mass are not sufficient to model accurately the independent solid and fluid constituents in these experiments. The model developed in this work more accurately predicts the bed-normal stress of a saturated sediment mass in motion and illustrates the importance of acceleration and deceleration.
Abstract:
For decades, Distance Transforms have proven useful for many image processing applications, and more recently they have started to be used in computer graphics environments. The goal of this paper is to propose a new technique based on Distance Transforms for detecting mesh elements which are close to the objects' external contour (from a given point of view), and using this information to weight the approximation error which will be tolerated during the mesh simplification process. The obtained results are evaluated in two ways: visually and using an objective metric that measures the geometrical difference between two polygonal meshes.
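As a concrete illustration of the transform involved (a generic sketch, not the paper's implementation), a city-block distance transform of a binary image can be computed exactly with two raster passes:

```python
# Two-pass city-block (Manhattan) distance transform on a binary grid,
# where 1 marks feature/contour pixels and 0 marks background.
def distance_transform(grid):
    h, w = len(grid), len(grid[0])
    INF = h + w  # upper bound on any city-block distance inside the grid
    d = [[0 if grid[y][x] else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate distances from the top-left
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # backward pass: propagate distances from the bottom-right
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

In a mesh-simplification setting, such a map (computed over a rendered silhouette) lets each mesh element look up its distance to the external contour and scale its tolerated error accordingly.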
Abstract:
Located in the northeastern region of Italy, the Venetian Plain (VP) is a sedimentary basin containing an extensively exploited groundwater system. The northern part is characterised by a large undifferentiated phreatic aquifer constituted by coarse grain alluvial deposits and recharged by local rainfalls and discharges from the rivers Brenta and Piave. The southern plain is characterised by a series of aquitards and sandy aquifers forming a well-defined artesian multi-aquifer system. In order to determine origins, transit times and mixing proportions of different components in groundwater (GW), a multi-tracer study (³H, ³He/⁴He, ¹⁴C, CFCs, SF₆, ⁸⁵Kr, ³⁹Ar, ⁸⁷Sr/⁸⁶Sr, δ¹⁸O, δ²H, cations, and anions) has been carried out in the VP between the rivers Brenta and Piave. The geochemical pattern of GW allows a distinction of the different water origins in the system, based in particular on HCO₃⁻, SO₄²⁻, Ca/Mg, NO₃⁻, δ¹⁸O, and δ²H. A radiogenic ⁸⁷Sr/⁸⁶Sr signature clearly marks GW originating from the Brenta and Tertiary catchments. End-member analysis and geochemical modelling highlight the existence of a mixing process involving waters recharged from the Brenta and Piave rivers, from the phreatic aquifer and from another GW reservoir characterised by very low mineralization. Noble gas excesses with respect to atmospheric equilibrium occur in all samples, particularly in the deeper aquifers of the Piave river, but also in phreatic water of the undifferentiated aquifers. ³H–³He ages in the phreatic aquifer and in the shallower levels of the multi-aquifer system indicate recharge times in the years 1970–2008. The progression of ³H–³He ages with the distance from the recharge areas, together with the initial tritium concentration (³H + ³He_trit), implies an infiltration rate of about 1 km/y and the absence of older components in these GW. SF₆ and ⁸⁵Kr data corroborate these conclusions. ³H–³He ages in the deeper artesian aquifers suggest a dilution process with older, tritium-free waters. 
¹⁴C Fontes–Garnier model ages of the old GW components range from 1 to 12 ka, yielding an apparent GW velocity of about 1–10 m/y. The increase of radiogenic ⁴He follows the progression of ¹⁴C ages. The ³⁹Ar, radiogenic ⁴He and ¹⁴C tracers yield model-dependent age ranges in overall good agreement once diffusion of ¹⁴C from aquitards, GW dispersion, lithogenic ³⁹Ar production, and ⁴He production-rate heterogeneities are taken into account. The rate of radiogenic ⁴He increase with time, deduced by comparison with ¹⁴C model ages, is however very low compared to other studies. Comparison with ¹³C and ¹⁴C data obtained 40 years ago on the same aquifer system shows that exploitation of GW caused a significant loss of the old groundwater reservoir during this time.
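The tritium–helium apparent ages used above follow from the standard decay relation t = (T½ / ln 2) · ln(1 + [³He_trit]/[³H]), with both concentrations in tritium units (TU). A minimal sketch of this generic formula (not code from the paper):

```python
import math

T_HALF_TRITIUM = 12.32  # tritium half-life in years

def tritium_helium_age(h3_tu, he3_trit_tu):
    """Apparent groundwater age (years) from measured 3H and tritiogenic 3He."""
    return T_HALF_TRITIUM / math.log(2) * math.log(1 + he3_trit_tu / h3_tu)
```

For example, a sample whose tritiogenic ³He equals its remaining ³H has an apparent age of exactly one half-life (12.32 years); a sample with no tritiogenic ³He dates the recharge to the present.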
Abstract:
Superresolution from plenoptic cameras or camera arrays is usually treated similarly to superresolution from video streams. However, the transformation between the low-resolution views can be determined precisely from camera geometry and parallax. Furthermore, as each low-resolution image originates from a unique physical camera, its sampling properties can also be unique. We exploit this option with a custom design of either the optics or the sensor pixels. This design makes sure that the sampling matrix of the complete system is always well-formed, enabling robust and high-resolution image reconstruction. We show that simply changing the pixel aspect ratio from square to anamorphic is sufficient to achieve that goal, as long as each camera has a unique aspect ratio. We support this claim with theoretical analysis and image reconstruction of real images. We derive the optimal aspect ratios for sets of 2 or 4 cameras. Finally, we verify our solution with a camera system using an anamorphic lens.
Abstract:
ATLAS measurements of the azimuthal anisotropy in lead–lead collisions at √s_NN = 2.76 TeV are shown using a dataset of approximately 7 μb⁻¹ collected at the LHC in 2010. The measurements are performed for charged particles with transverse momenta 0.5 < pT < 20 GeV and in the pseudorapidity range |η| < 2.5. The anisotropy is characterized by the Fourier coefficients, vn, of the charged-particle azimuthal angle distribution for n = 2–4. The Fourier coefficients are evaluated using multi-particle cumulants calculated with the generating function method. Results on the transverse momentum, pseudorapidity and centrality dependence of the vn coefficients are presented. The elliptic flow, v2, is obtained from the two-, four-, six- and eight-particle cumulants while higher-order coefficients, v3 and v4, are determined with two- and four-particle cumulants. Flow harmonics vn measured with four-particle cumulants are significantly reduced compared to the measurement involving two-particle cumulants. A comparison to vn measurements obtained using different analysis methods and previously reported by the LHC experiments is also shown. Results of measurements of flow fluctuations evaluated with multi-particle cumulants are shown as a function of transverse momentum and the collision centrality. Models of the initial spatial geometry and its fluctuations fail to describe the flow fluctuations measurements.
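The two-particle cumulant can be illustrated with a toy Monte Carlo (a generic Q-vector sketch, not the ATLAS generating-function implementation; multiplicities and the injected v2 are made up): per event, Q_n = Σ_j exp(i n φ_j), the pair average is ⟨cos n(φ_i − φ_j)⟩ = (|Q_n|² − M) / (M(M − 1)), and v_n{2} is the square root of its event average.

```python
import cmath, math, random

def vn_two_particle(events, n=2):
    """Estimate v_n from two-particle azimuthal correlations via Q-vectors."""
    total, weight = 0.0, 0.0
    for phis in events:
        M = len(phis)
        Q = sum(cmath.exp(1j * n * p) for p in phis)
        total += abs(Q) ** 2 - M     # sum of cos n(phi_i - phi_j) over all pairs
        weight += M * (M - 1)        # number of pairs in this event
    return math.sqrt(total / weight)

def sample_event(mult, v2, psi):
    """Toy event: angles drawn from dN/dphi ~ 1 + 2 v2 cos 2(phi - psi)."""
    phis = []
    while len(phis) < mult:
        phi = random.uniform(0, 2 * math.pi)
        # rejection sampling against the envelope 1 + 2 v2
        if random.uniform(0, 1 + 2 * v2) < 1 + 2 * v2 * math.cos(2 * (phi - psi)):
            phis.append(phi)
    return phis
```

With purely flow-driven toy events the estimator recovers the injected v2; in real data, non-flow correlations bias v_n{2} upward, which is why four- and higher-particle cumulants give reduced values.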
Abstract:
Information-centric networking (ICN) is a promising approach for wireless communication because users can exploit the broadcast nature of the wireless medium to quickly find desired content at nearby nodes. However, wireless multi-hop communication is prone to collisions, and it is crucial to quickly detect and react to them to optimize transmission times and avoid spurious retransmissions. Several adaptive retransmission timers have been used in the related ICN literature, but they have not been compared and evaluated in wireless multi-hop environments. In this work, we evaluate existing algorithms in wireless multi-hop communication. We find that existing algorithms are not optimized for wireless communication, but slight modifications can result in considerably better performance without increasing the number of transmitted Interests.
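As a point of reference for the kind of adaptive timer being evaluated, here is a hedged sketch of the classic Jacobson/Karels estimator standardized for TCP in RFC 6298 (one common baseline for such studies, not the paper's ICN-specific variants; the constants are the usual defaults):

```python
class RtoEstimator:
    """Adaptive retransmission timeout from smoothed RTT and RTT variance."""

    def __init__(self, alpha=0.125, beta=0.25, min_rto=0.2):
        self.alpha, self.beta, self.min_rto = alpha, beta, min_rto
        self.srtt = None    # smoothed round-trip time (seconds)
        self.rttvar = None  # round-trip-time variation estimate

    def sample(self, rtt):
        """Feed one RTT measurement; returns the updated timeout."""
        if self.srtt is None:                 # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.rto()

    def rto(self):
        # RTO = SRTT + 4 * RTTVAR, clamped to a minimum timeout
        return max(self.min_rto, self.srtt + 4 * self.rttvar)
```

An RTT spike inflates RTTVAR and hence the timeout, which is exactly the behavior that needs retuning in lossy wireless multi-hop settings where spikes are routine rather than exceptional.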
Abstract:
OBJECTIVE Cochlear implants (CIs) have become the gold standard treatment for deafness. These neuroprosthetic devices feature a linear electrode array, surgically inserted into the cochlea, and function by directly stimulating the auditory neurons located within the spiral ganglion, bypassing lost or non-functioning hair cells. Despite their success, some limitations still remain, including poor frequency resolution and high energy consumption. In both cases, the anatomical gap between the electrode array and the spiral ganglion neurons (SGNs) is believed to be an important limiting factor. The final goal of the study is to characterize response profiles of SGNs growing in intimate contact with an electrode array, in view of designing novel CI devices and stimulation protocols featuring a gapless interface with auditory neurons. APPROACH We have characterized SGN responses to extracellular stimulation using multi-electrode arrays (MEAs). This setup, in our view, allows us to optimize in vitro many of the limiting interface aspects between CIs and SGNs. MAIN RESULTS Early postnatal mouse SGN explants were analyzed after 6-18 days in culture. Different stimulation protocols were compared with the aim of lowering the stimulation threshold and the energy needed to elicit a response. In the best case, a four-fold reduction of the energy was obtained by lengthening the biphasic stimulus from 40 μs to 160 μs. Similarly, quasi-monophasic pulses were more effective than biphasic pulses, and the insertion of an interphase gap moderately improved efficiency. Finally, stimulation with an external electrode mounted on a micromanipulator showed that the energy needed to elicit a response could be reduced by a factor of five by decreasing its distance from the auditory neurons from 40 μm to 0 μm. SIGNIFICANCE This study is the first to show electrical activity of SGNs on MEAs. 
Our findings may help to improve stimulation by CIs and to reduce their energy consumption, thereby contributing to the development of fully implantable devices with better auditory resolution in the future.
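The pulse-duration effect reported above is qualitatively consistent with classical strength-duration behavior. As a purely generic illustration (Weiss/Lapicque form with made-up rheobase and chronaxie values, not the paper's measurements): the threshold current falls with pulse width as I_th(t) = I_rh · (1 + t_ch / t), so the per-phase energy E ∝ I_th² · t is minimized near t = t_ch.

```python
def threshold_current(t_us, i_rheobase=1.0, chronaxie_us=100.0):
    """Weiss-law threshold current for a pulse of duration t_us (hypothetical units)."""
    return i_rheobase * (1 + chronaxie_us / t_us)

def relative_energy(t_us, **kw):
    """Per-phase stimulus energy up to a constant factor: E ~ I_th^2 * t."""
    return threshold_current(t_us, **kw) ** 2 * t_us
```

Differentiating E(t) = I_rh²·(t + 2·t_ch + t_ch²/t) gives dE/dt = 0 at t = t_ch, so pulses near the chronaxie are the most energy-efficient; longer pulses lower the threshold current but eventually waste energy on duration.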
Abstract:
When proposing primary control (PC; changing the world to fit the self) versus secondary control (SC; changing the self to fit the world) theory, Weisz et al. (1984) argued for the importance of the “serenity to accept the things I cannot change, the courage to change the things I can” (p. 967), and the wisdom to choose the right control strategy that fits the context. Although the dual-process theory of control generated hundreds of empirical studies, most of them focused on the dichotomy of PC and SC, and none tapped into the critical concept: individuals’ ability to know when to use which. This project addressed this issue by using scenario questions to study the impact of situationally adaptive control strategies on youth well-being. To understand the antecedents of youths’ preference for PC or SC, we also connected PC/SC theory with Dweck’s implicit theory about the changeability of the world. We hypothesized that youths’ belief about the world’s changeability affects how difficult it is for them to choose a situationally adaptive control orientation, which in turn impacts their well-being. This study included adolescents and emerging adults between the ages of 18 and 28 years (Mean = 20.87 years) from the US (n = 98), China (n = 100), and Switzerland (n = 103). Participants answered a questionnaire including a measure of implicit theories about the fixedness of the external world, a scenario-based measure of control orientation, and several measures of well-being. Preliminary analyses of the scenario-based control orientation measures showed striking cross-cultural similarity of preferred control responses: while for three of the six scenarios primary control was the predominately chosen control response in all cultures, for the other three scenarios secondary control was the predominately chosen response. This suggested that youths across cultures are aware that some situations call for primary control, while others demand secondary control. 
We considered the control strategy winning the majority of the votes to be the strategy that is situationally adaptive. The results of a multi-group structural equation mediation model with the extent of belief in a fixed world as independent variable, the difficulties of carrying out the respective adaptive versus non-adaptive control responses as two mediating variables and the latent well-being variable as dependent variable showed a cross-culturally similar pattern of effects: a belief in a fixed world was significantly related to higher difficulties in carrying out the normative as well as the non-normative control response, but only the difficulty of carrying out the normative control response (be it primary control in situations where primary control is normative or secondary control in situations where secondary control is normative) was significantly related to a lower reported well-being (while the difficulty of carrying out the non-normative response was unrelated to well-being). While previous research focused on cross-cultural differences on the choice of PC or SC, this study shed light on the universal necessity of applying the right kind of control to fit the situation.
Abstract:
A high-resolution multi-proxy record from Lake Van, eastern Anatolia, derived from a lacustrine sequence cored at the 357 m deep Ahlat Ridge (AR), allows a comprehensive view of paleoclimate and environmental history in the continental Near East during the last interglacial (LI). We combined paleovegetation (pollen), stable oxygen isotope (δ¹⁸O_bulk) and XRF data from the same sedimentary sequence, showing distinct variations during the period from 135 to 110 ka ago leading into and out of full interglacial conditions. The last interglacial plateau, as defined by the presence of thermophilous steppe-forest communities, lasted ca. 13.5 ka, from ~129.1-115.6 ka BP. The detailed palynological sequence at Lake Van documents a vegetation succession with several climatic phases: (I) the Pistacia zone (ca. 131.2-129.1 ka BP) indicates summer dryness and mild winter conditions during the initial warming, (II) the Quercus-Ulmus zone (ca. 129.1-127.2 ka BP) occurred during warm and humid climate conditions with enhanced evaporation, (III) the Carpinus zone (ca. 127.2-124.1 ka BP) suggests increasingly cooler and wetter conditions, and (IV) the expansion of Pinus at ~124.1 ka BP marks the onset of a colder/drier environment that extended into the interval of global ice growth. Pollen data suggest migration of thermophilous trees from refugial areas at the beginning of the last interglacial. Analogous to the current interglacial, the migration documents a time lag between the onset of climatic amelioration and the establishment of an oak steppe-forest, spanning 2.1 ka. Hence, the major difference between the last interglacial and the current interglacial (Holocene) is the abundance of Pinus as well as the decrease of deciduous broad-leaved trees, indicating higher continentality during the last interglacial. 
Finally, our results demonstrate intra-interglacial variability in the low mid-latitudes and suggest a close connection with the high-frequency climate variability recorded in Greenland ice cores.