952 results for Robust epipolar-geometry estimation
Abstract:
Intumescent paints are used as passive fire protection in the construction sector, in particular to increase the fire resistance of steel elements. The thermal properties of these coatings are often unknown or difficult to estimate, because they vary considerably during the expansion process that the intumescent undergoes when exposed to the heat of a fire. For this reason, validating the fire resistance of a commercial coating relies on methods that are costly both in money and in execution time, in which each protected beam and column must be tested one at a time in the cellulosic-curve fire resistance test. This thesis instead adopts an approach based on thermal modelling of the intumescent coating, which helps simplify the testing procedure and supports the fire-resistance design of structures. The common thread running through the stages of this thesis is the methodology used to estimate the unknown thermal behaviour: Inverse Parameter Estimation. In the first phase, the paint was characterized chemically and physically with instruments such as DSC, TGA and FT-IR, which provided the qualitative composition, the temperatures at which the main chemical and physical processes in the paint take place, and the enthalpies associated with these events. In the second phase, the paints were characterized thermally in order to obtain their equivalent thermal conductivity.
To this end, steel temperatures from furnace tests with heating according to the ISO-834 standard were used first; then, to better define the boundary conditions, a cone calorimeter was adopted as the heat source, with the temperature measured directly within the thickness of the intumescent layer. The conductivity values obtained are consistent with the scientific literature and show a dependence on temperature, while varying little with the deposited paint thickness and the sample geometry.
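The inverse-parameter-estimation step described in this abstract can be sketched as a least-squares search: simulate the protected steel temperature for candidate conductivities and keep the one that best matches the measurements. This is a minimal pure-Python illustration; the coating thickness, section factor and steel properties below are assumed values for the sketch, not those of the thesis.

```python
import math

def steel_temp_curve(k_eq, t_end=600.0, dt=1.0):
    """Lumped-capacitance steel temperature under ISO-834 gas heating,
    protected by a coating of equivalent conductivity k_eq [W/(m K)]."""
    d_p = 0.002              # coating thickness [m] (assumed)
    section_factor = 150.0   # Am/V of the steel profile [1/m] (assumed)
    rho_c = 7850.0 * 600.0   # steel density x specific heat [J/(m^3 K)]
    T_s = 20.0
    curve = []
    t = 0.0
    while t <= t_end:
        T_gas = 20.0 + 345.0 * math.log10(8.0 * t / 60.0 + 1.0)  # ISO-834 curve
        T_s += (k_eq / d_p) * section_factor / rho_c * (T_gas - T_s) * dt
        curve.append(T_s)
        t += dt
    return curve

def estimate_k(measured, candidates):
    """Inverse parameter estimation: choose the conductivity whose
    simulated curve minimizes the squared error to the measurements."""
    def sse(k):
        return sum((s - m) ** 2 for s, m in zip(steel_temp_curve(k), measured))
    return min(candidates, key=sse)

k_true = 0.10                          # synthetic "unknown" conductivity
measured = steel_temp_curve(k_true)    # stand-in for furnace measurements
k_hat = estimate_k(measured, [0.05 + 0.01 * i for i in range(11)])
```

In practice the forward model would be a full heat-transfer simulation and the search a gradient-based optimizer, but the structure, forward model plus error minimization, is the same.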
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context where observations are collected and reported by a network of sensors, and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moment estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. 
We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
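As a toy illustration of the robust-likelihood idea, far simpler than the REML and projected-process kriging machinery of this abstract, the sketch below uses Huber weighting to estimate a location parameter when one sensor reports an extreme value. The threshold c = 1.345 is the conventional choice; the data are invented.

```python
def huber_location(xs, c=1.345, iters=30):
    """Huber M-estimate of location via iteratively reweighted means.
    Residuals beyond c (in robust-scale units) are down-weighted."""
    xs = sorted(xs)
    mu = xs[len(xs) // 2]                            # start at the median
    mad = sorted(abs(x - mu) for x in xs)[len(xs) // 2]
    scale = mad / 0.6745 or 1.0                      # robust scale from MAD
    for _ in range(iters):
        w = [1.0 if abs((x - mu) / scale) <= c else c * scale / abs(x - mu)
             for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [1.0, 1.2, 0.9, 1.1, 0.95, 1.05, 50.0]   # one faulty sensor reading
plain_mean = sum(data) / len(data)               # dragged toward the outlier
robust_mean = huber_location(data)               # stays near the bulk
```

The same down-weighting principle, applied inside a likelihood rather than to a simple mean, is what keeps the covariance (variogram) estimate stable when a sensor sporadically malfunctions.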
Abstract:
We investigate full-field detection-based maximum-likelihood sequence estimation (MLSE) for chromatic dispersion compensation in 10 Gbit/s OOK optical communication systems. Important design criteria are identified to optimize the system performance. It is confirmed that approximately 50% improvement in transmission reach can be achieved compared to conventional direct-detection MLSE at both 4 and 16 states. It is also shown that full-field MLSE is more robust to the noise and the associated noise amplifications in full-field reconstruction, and consequently exhibits better tolerance to nonoptimized system parameters than full-field feedforward equalizer. Experiments over 124 km spans of field-installed single-mode fiber without optical dispersion compensation using full-field MLSE verify the theoretically predicted performance benefits.
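The sequence-estimation core of an MLSE receiver is the Viterbi algorithm over a channel trellis. The sketch below is a minimal, hypothetical version for a two-tap real ISI channel with on-off keying levels, using squared error as the branch metric; it is not the full-field receiver of the abstract.

```python
def mlse_viterbi(received, h0=1.0, h1=0.5, symbols=(0.0, 1.0)):
    """Viterbi MLSE for a two-tap ISI channel r_k = h0*x_k + h1*x_(k-1).
    Trellis state = previous symbol; branch metric = squared error."""
    cost = {s: 0.0 for s in symbols}     # free initial state (sketch only)
    paths = {s: [] for s in symbols}
    for r in received:
        new_cost, new_paths = {}, {}
        for cur in symbols:              # hypothesised current symbol
            metrics = {p: cost[p] + (r - (h0 * cur + h1 * p)) ** 2
                       for p in symbols}
            best_prev = min(metrics, key=metrics.get)
            new_cost[cur] = metrics[best_prev]
            new_paths[cur] = paths[best_prev] + [cur]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]

# Noiseless received samples for the bit pattern 1 0 1 1 0
detected = mlse_viterbi([1.0, 0.5, 1.0, 1.5, 0.5])
```

A 4- or 16-state receiver, as in the abstract, simply uses a longer channel memory, so the trellis state covers the last two or four symbols.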
Abstract:
This paper is part of a project which aims to research the opportunities for the re-use of batteries on the electricity grid system after their primary use in low and ultra-low carbon vehicles. One potential revenue stream is to provide primary/secondary/high-frequency response to National Grid through market mechanisms via DNOs or energy service providers. Some commercial battery energy storage systems (BESS) already exist on the grid system, but these tend to use costly new or high-performance batteries. Second-life batteries should be available at lower cost than new batteries, but reliability becomes an important issue as individual batteries may suffer from degraded performance or failure. Converter topology design could therefore be used to influence the overall system reliability. A detailed reliability calculation of different single-phase battery-to-grid converter interfacing schemes is presented, and a suitable converter topology for a robust and reliable BESS is recommended.
Abstract:
Along with other diseases that can affect binocular vision and reduce a subject's visual quality, Congenital Nystagmus (CN) is of peculiar interest. CN is an ocular-motor disorder characterized by involuntary, conjugate ocular oscillations and, although identified more than forty years ago, its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) since it can be present at birth or arise in the first months of life. The majority of CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, oscillations of the nystagmus. However, the image of a given target can still be stable during short periods in which eye velocity slows down while the target image is placed on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyze its main features such as waveform shape, amplitude and frequency. Suitably processed eye movement recordings allow the computation of "estimated visual acuity" predictors, analytical functions that estimate expected visual acuity from signal features such as foveation time and foveation position variability. Hence, it is fundamental to develop robust and accurate methods to measure both of those parameters in order to obtain reliable values from the predictors. In this chapter the current methods to record eye movements in subjects with congenital nystagmus are discussed, and the present techniques to accurately compute foveation time and eye position are presented. This study aims to disclose new methodologies in the analysis of congenital nystagmus eye movements, in order to identify nystagmus cycles and to evaluate foveation time, reducing the influence of repositioning saccades and data noise on the critical parameters of the estimation functions.
Use of those functions extends the information acquired with typical visual acuity measurement (e.g., Landolt C test) and could be a support for treatment planning or therapy monitoring. © 2010 by Nova Science Publishers, Inc. All rights reserved.
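As a rough sketch of how a foveation-time estimate can be extracted from an eye-position trace, the function below counts samples where eye velocity and position both stay within thresholds; the threshold values and the synthetic trace are illustrative, not the clinical criteria used in the chapter.

```python
def foveation_time(positions, dt, vel_limit=4.0, pos_limit=0.5):
    """Total time (s) spent foveating: eye velocity below vel_limit (deg/s)
    and eye position within pos_limit (deg) of a target at 0 deg.
    Thresholds are illustrative, not clinical values."""
    total = 0.0
    for i in range(1, len(positions)):
        velocity = (positions[i] - positions[i - 1]) / dt
        if abs(velocity) <= vel_limit and abs(positions[i]) <= pos_limit:
            total += dt
    return total

# 100 Hz trace: steady fixation, then a fast nystagmus beat away from target
trace = [0.0] * 50 + [3.0] * 10
quiet = foveation_time(trace, dt=0.01)
```

Real recordings would additionally require excluding repositioning saccades and smoothing sensor noise before thresholding, which is exactly the refinement the chapter addresses.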
Abstract:
We propose a robust adaptive time synchronization and frequency offset estimation method for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems by applying electrical dispersion pre-compensation (pre-EDC) to the pilot symbol. This technique effectively eliminates the timing error due to the fiber chromatic dispersion, thus increasing significantly the accuracy of the frequency offset estimation process and improving the overall system performance. In addition, a simple design of the pilot symbol is proposed for full-range frequency offset estimation. This pilot symbol can also be used to carry useful data to effectively reduce the overhead due to time synchronization by a factor of 2.
Abstract:
2000 Mathematics Subject Classification: 62J05, 62J10, 62F35, 62H12, 62P30.
Abstract:
2010 Mathematics Subject Classification: 60J80.
Abstract:
2010 Mathematics Subject Classification: 62F10, 62F12.
Abstract:
Technology changes rapidly over the years, continuously providing more options for computer alternatives and making economic, intra-relational and other transactions easier. However, the introduction of new technology "pushes" old Information and Communication Technology (ICT) products out of use. E-waste is defined as the quantity of ICT products no longer in use, and is a bivariate function of the quantities sold and the probability that a specific quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries of these regions so as to compute obsolete computer quantities. In order to provide robust results for the forecasted quantities, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend model, (v) Level model, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, selecting for each country the model that provided the best results in terms of minimum error indices (Mean Absolute Error and Mean Square Error) for the in-sample estimation. As new technology does not diffuse in all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability that a certain quantity of computers is considered obsolete, is not adequately modeled in the literature. The time horizon for the forecasted quantities is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
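The in-sample model selection step described above can be sketched in a few lines: fit or evaluate candidate diffusion curves against the sales series and keep the one with the lowest error index. Only two of the seven candidate models are shown, with invented parameters and a synthetic series.

```python
import math

def logistic(t, M=100.0, k=0.5, t0=10.0):     # illustrative parameters
    """Logistic diffusion curve with saturation level M."""
    return M / (1.0 + math.exp(-k * (t - t0)))

def gompertz(t, M=100.0, b=5.0, c=0.2):       # illustrative parameters
    """Gompertz diffusion curve with saturation level M."""
    return M * math.exp(-b * math.exp(-c * t))

def mse(model, data):
    """Mean Square Error of a model over (t, y) pairs (in-sample)."""
    return sum((model(t) - y) ** 2 for t, y in data) / len(data)

# Synthetic "cumulative sales" series generated from the logistic curve
data = [(t, logistic(t)) for t in range(21)]

# In-sample selection: keep the candidate with the smallest error index
candidates = {"logistic": logistic, "gompertz": gompertz}
best_name = min(candidates, key=lambda name: mse(candidates[name], data))
```

The paper's procedure additionally estimates each model's parameters per country and compares MAE as well as MSE, but the selection logic is the same.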
Abstract:
The aim of this research was to demonstrate a high-current, stable field emission (FE) source based on carbon nanotubes (CNTs) and an electron-multiplier microchannel plate (MCP), and to design efficient field emitters. In recent years various CNT-based FE devices have been demonstrated, including field emission displays, x-ray sources and many more. However, to use CNTs as a source in high-powered microwave (HPM) devices, a higher and stable current in the range of a few milliamperes to amperes is required. To achieve such high current we developed a novel technique of introducing an MCP between the CNT cathode and the anode. An MCP is an array of electron multipliers; it operates by avalanche multiplication of the secondary electrons generated when electrons strike the channel walls of the MCP. The FE current from the CNTs is enhanced by this avalanche multiplication, and in addition the MCP protects the CNTs from irreversible damage during vacuum arcing. A conventional MCP is not suitable for this purpose because of the low secondary-emission properties of its materials. To achieve higher and stable currents we designed and fabricated a unique ceramic MCP incorporating high-SEY materials. The MCP was fabricated using optimum design parameters, including channel dimensions and material properties obtained from charged-particle-optics (CPO) simulation. The Child-Langmuir law, which gives the optimum current density from an electron source, was taken into account during the system design and experiments. Each MCP channel consisted of MgO-coated CNTs; MgO was chosen from various material systems for its very high SEY. With the MCP inserted between the CNT cathode and the anode, a stable and higher emission current was achieved, ∼25 times higher than without the MCP. A brighter emission image was also evidenced due to the enhanced emission current.
The obtained results are a significant technological advance, and this research holds promise for electron sources in a new generation of lightweight, efficient and compact microwave devices for telecommunications in satellites or space applications. As part of this work, novel emitters with a multistage geometry and improved FE properties were also developed.
Abstract:
The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans is still relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.
Abstract:
ODP Site 1089 is optimally located to monitor the occurrence of maxima in Agulhas heat and salt spillage from the Indian to the Atlantic Ocean. Radiolarian-based paleotemperature transfer functions allowed reconstruction of the climatic history of the last 450 kyr at this location. A warm sea surface temperature anomaly during Marine Isotope Stage (MIS) 10 was recognized and traced to other oceanic records along the surface branch of the global thermohaline circulation (THC) system; it is particularly marked at locations where a strong interaction between oceanic and atmospheric overturning cells and fronts occurs. This anomaly is absent in the Vostok ice core deuterium record and in oceanic records from the Antarctic Zone. However, it is present in the deuterium excess record from the Vostok ice core, interpreted as reflecting the temperature at the moisture source site for the snow precipitated at Vostok Station. As atmospheric models predict a subtropical Indian source for such moisture, this provides the necessary teleconnection between East Antarctica and ODP Site 1089, since the subtropical Indian Ocean is also the source area of the Agulhas Current, the main climate agent at our study location. The presence of the MIS 10 anomaly in the delta13C foraminiferal records from the same core supports its connection to oceanic mechanisms, linking stronger Agulhas spillover intensity to increased productivity in the study area. We suggest, by analogy with modern oceanographic observations, that this is a consequence of a shallow nutricline, induced by eddy mixing and baroclinic tide generation, which are in turn connected to the flow geometry and intensity of the Agulhas Current as it flows past the Agulhas Bank. We interpret the intensified inflow of the Agulhas Current into the South Atlantic as a response to the switch between lower and higher amplitude in the insolation forcing in the Agulhas Current source area.
This would result in higher SSTs in the Cape Basin during the glacial MIS 10, due to the release into the South Atlantic of the heat previously accumulated in the subtropical and equatorial Indian and Pacific Oceans. If our explanation of the MIS 10 anomaly in terms of an insolation variability switch is correct, we might expect a future Agulhas SST anomaly event to further delay the onset of the next glacial age. In fact, the insolation forcing conditions for the Holocene (the current interglacial) are very similar to those present during MIS 11 (the interglacial preceding MIS 10), as both periods are characterized by low insolation variability in the Agulhas Current source area. Natural climatic variability will thus force the Earth system in the same direction as the anthropogenic global warming trend, leading to even warmer than expected global temperatures in the near future.
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
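The principal angles that govern the classification results above can be computed from the singular values of the product of orthonormal bases, a standard construction (this is generic linear algebra, not code from the dissertation):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians, ascending) between the column spans
    of A and B, via the singular values of Qa^T Qb."""
    Qa = np.linalg.qr(A)[0]
    Qb = np.linalg.qr(B)[0]
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.arccos(cosines)

theta = np.pi / 6
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span{e1, e2} in R^3
B = np.array([[1.0, 0.0],
              [0.0, np.cos(theta)],
              [0.0, np.sin(theta)]])                 # e2 tilted by theta
angles = principal_angles(A, B)
```

In the vanishing-mismatch regime described above, the misclassification probability involves the product of the sines of these angles, so a transform such as TRAIT that enlarges them directly reduces error.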
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing an obvious advantage of the proposed approaches when the training set is small.
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
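The deviation statistic used for anomaly detection can be illustrated with a fixed learned subspace: the norm of the component of a datum orthogonal to that subspace serves as the anomaly score. A minimal sketch with synthetic data (the tracked, multiscale version of the dissertation updates the subspace online):

```python
import numpy as np

def residual(U, x):
    """Norm of the component of x orthogonal to span(U), for U with
    orthonormal columns. A jump in this statistic flags an anomaly."""
    return float(np.linalg.norm(x - U @ (U.T @ x)))

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(10, 2)))[0]   # learned 2-D subspace in R^10
on_model = U @ rng.normal(size=2)               # datum consistent with model
anomaly = on_model + rng.normal(size=10)        # abrupt off-subspace change
```

Thresholding this residual over time is a multivariate analogue of classical changepoint detection on a one-dimensional signal.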
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
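Inverse perspective mapping, whether applied to images or to feature locations as above, reduces to applying a planar homography with homogeneous normalization. A generic sketch of that point-warping operation (the matrix H below is invented for illustration; a real system would derive it from the estimated camera pose and ground plane):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates,
    normalizing by the third homogeneous coordinate."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# A hypothetical mapping: uniform scale plus a perspective term
H = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.1, 1.0]])
mapped = warp_points(H, [[3.0, 4.0]])
```

Warping only the sparse feature locations, rather than resampling the whole image, is what makes the feature-space approach orders of magnitude cheaper than image warping.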
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.