849 results for Density-based Scanning Algorithm
Abstract:
In this study we propose ranking hedge funds using the distribution of a performance measure rather than its point value. The Generalized Sharpe Ratio and other similar measures that take into account the higher-order moments of portfolio return distributions are commonly used to evaluate hedge fund performance. The literature in this field has reported non-significant differences in rankings between performance measures that take, and those that do not take, into account higher moments of the distribution. Our approach provides a much more powerful way to differentiate between hedge fund performances. We use a semi-nonparametric density based on Gram-Charlier expansions to forecast the conditional distribution of hedge fund returns and the corresponding distribution of the performance measure. Through a forecasting exercise we show the advantages of our technique relative to using the more traditional point performance measures.
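As an illustration of the idea (not code from the paper), the following minimal Python sketch builds a Gram-Charlier expanded density with assumed skewness and excess kurtosis, draws returns from it, and bootstraps a distribution of the annualized Sharpe ratio instead of a single point value; all fund parameters are hypothetical.

# Minimal sketch (not the paper's estimator): a Gram-Charlier expansion of a
# standardized return density, and a bootstrap of the Sharpe ratio computed
# from returns drawn from that density.  All parameter values are illustrative.
import numpy as np

def gram_charlier_pdf(x, skew, exkurt):
    """Gram-Charlier expansion around the standard normal density."""
    phi = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    he3 = x**3 - 3 * x                 # Hermite polynomial He3
    he4 = x**4 - 6 * x**2 + 3          # Hermite polynomial He4
    return phi * (1 + skew / 6 * he3 + exkurt / 24 * he4)

def sample_from_pdf(pdf_vals, grid, n, rng):
    """Inverse-CDF sampling from a density tabulated on a grid."""
    cdf = np.cumsum(pdf_vals)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=n), cdf, grid)

rng = np.random.default_rng(0)
grid = np.linspace(-6, 6, 2001)
pdf = np.clip(gram_charlier_pdf(grid, skew=-0.5, exkurt=1.2), 0, None)

# Distribution of the (annualized) Sharpe ratio rather than a single point value.
mu, sigma, n_obs, n_boot = 0.006, 0.02, 120, 5000   # hypothetical monthly fund data
sharpe_draws = np.empty(n_boot)
for b in range(n_boot):
    r = mu + sigma * sample_from_pdf(pdf, grid, n_obs, rng)
    sharpe_draws[b] = np.sqrt(12) * r.mean() / r.std(ddof=1)

print("5th-95th percentile of the Sharpe ratio:", np.percentile(sharpe_draws, [5, 95]))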
Abstract:
With applications ranging from aerospace to biomedicine, additive manufacturing (AM) has been revolutionizing the manufacturing industry. The ability of additive techniques, such as selective laser melting (SLM), to create fully functional, geometrically complex, and unique parts out of high-strength materials is of great interest. Unfortunately, despite the numerous advantages afforded by this technology, its widespread adoption is hindered by a lack of on-line, real-time feedback control and quality assurance techniques. In this thesis, inline coherent imaging (ICI), a broadband, spatially coherent imaging technique, is used to observe the SLM process in 15-45 $\mu m$ 316L stainless steel. Imaging of both single and multilayer builds is performed at a rate of 200 $kHz$, with a resolution of tens of microns, and a high dynamic range rendering it impervious to blinding from the process beam. This allows imaging before, during, and after laser processing to observe changes in the morphology and stability of the melt. Galvanometer-based scanning of the imaging beam relative to the process beam during the creation of single tracks is used to gain a unique perspective of the SLM process that has so far been unobservable by other monitoring techniques. Single-track processing is also used to investigate the possibility of a preliminary feedback control parameter based on the process beam power, through imaging with both coaxial and 100 $\mu m$ offset alignment with respect to the process beam. The 100 $\mu m$ offset improved imaging by increasing the number of bright A-lines (i.e. those with signal greater than the 10 $dB$ noise floor) by 300%. The overlap between adjacent tracks in a single layer is imaged to detect characteristic fault signatures. Full multilayer builds are carried out, and the resultant ICI images are used to detect defects in the finished part and to improve upon the initial design of the build system. Damage to the recoater blade is assessed using powder layer scans acquired during a 3D build. The ability of ICI to monitor SLM processes at such high rates with high resolution offers extraordinary potential for future advances in on-line feedback control of additive manufacturing.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer acting as a subroutine that executes a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using a scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is flexible and can be extended to incorporate metadata associated with questions. To show this, we further propose two extended models, one handling input tasks with real-valued features and the other handling tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment that adapts to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, which allows users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completion of a challenging task is more costly than a click-away question. Here, we address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates to the budget, the trustworthiness of the crowd, and the costs of obtaining labels from the crowd. A higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
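The following minimal Python sketch is not the thesis's joint probabilistic model; it only illustrates the underlying idea of trust-weighted aggregation of crowd answers, with trust re-estimated from agreement with the current consensus. Worker reliabilities and counts are illustrative.

# Minimal sketch, not the thesis's model: trust-weighted aggregation of binary
# worker answers, with trust scores re-estimated from agreement with the current
# consensus.  All names and numbers are illustrative.
import numpy as np

def aggregate(answers, n_iters=10):
    """answers: (n_workers, n_questions) array of {0, 1} labels."""
    n_workers, _ = answers.shape
    trust = np.full(n_workers, 0.7)            # optimistic start -> first round is a plain majority vote
    for _ in range(n_iters):
        # Trust-weighted vote for each question (log-odds weighting).
        weights = np.log(trust / (1 - trust) + 1e-9)
        scores = weights @ (2 * answers - 1)   # > 0 favours label 1
        labels = (scores > 0).astype(int)
        # Re-estimate each worker's trust as agreement with the consensus.
        agreement = (answers == labels).mean(axis=1)
        trust = np.clip(agreement, 0.05, 0.95) # keep the weights finite
    return labels, trust

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=50)
reliability = np.array([0.9, 0.85, 0.6, 0.55, 0.3])   # last worker is adversarial
answers = np.where(rng.uniform(size=(5, 50)) < reliability[:, None], truth, 1 - truth)
labels, trust = aggregate(answers)
print("recovered accuracy:", (labels == truth).mean())
print("estimated trust:", np.round(trust, 2))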
Abstract:
In order to power our planet for the next century, clean energy technologies need to be developed and deployed. Photovoltaic solar cells, which convert sunlight into electricity, are a clear option; however, they currently supply only 0.1% of US electricity due to the relatively high cost per watt of generation. Thus, our goal is to create more power from a photovoltaic device while simultaneously reducing its price. To accomplish this goal, we are creating new high-efficiency anti-reflection coatings that allow more of the incident sunlight to be converted to electricity, using simple and inexpensive coating techniques that enable reduced manufacturing costs. Traditional anti-reflection coatings (consisting of thin layers of non-absorbing materials) rely on the destructive interference of the reflected light, causing more light to enter the device and subsequently be absorbed. While these coatings are used on nearly all commercial cells, they are wavelength-dependent and are deposited using expensive processes that require elevated temperatures, which increase production cost and can be detrimental to some temperature-sensitive solar cell materials. We are developing two new classes of anti-reflection coatings (ARCs) based on textured dielectric materials: (i) a transparent, flexible paper technology that relies on optical scattering and reduced refractive index contrast between the air and the semiconductor, and (ii) silicon dioxide (SiO2) nanosphere arrays that rely on collective optical resonances. Both techniques improve solar cell absorption and ultimately yield high-efficiency, low-cost devices. For the transparent paper-based ARCs, we have recently shown that they improve solar cell efficiencies for all angles of incident illumination, reducing the need for costly tracking of the sun’s position. For a GaAs solar cell, we achieved a 24% improvement in the power conversion efficiency using this simple coating. Because the transparent paper is made from an earth-abundant material (wood pulp) using an easy, inexpensive and scalable process, this type of ARC is an excellent candidate for future solar technologies. The coatings based on arrays of dielectric nanospheres also show excellent potential for inexpensive, high-efficiency solar cells. The fabrication process is based on a Meyer rod rolling technique, which can be performed at room temperature and applied to mass production, yielding a scalable and inexpensive manufacturing process. The deposited monolayer of SiO2 nanospheres, with a diameter of 500 nm, on a bare Si wafer leads to a significant increase in light absorption and a higher expected current density based on initial simulations, on the order of 15-20%. When applied to a Si solar cell containing a traditional anti-reflection coating (a Si3N4 thin film), an additional increase in the spectral current density is observed, 5% beyond what a typical commercial device would achieve. Due to the coupling between the spheres, originating from whispering gallery modes (WGMs) inside each nanosphere, the incident light is strongly coupled into the high-index absorbing material, leading to increased light absorption. Furthermore, the SiO2 nanospheres scatter and diffract light in such a way that both the optical and electrical properties of the device have little dependence on incident angle, eliminating the need for solar tracking.
Because the layer can be made with an easy, inexpensive, and scalable process, this anti-reflection coating is also an excellent candidate for replacing conventional technologies relying on complicated and expensive processes.
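For reference, the conventional interference coating mentioned above follows the textbook single-layer quarter-wave condition (standard thin-film optics, not a result of this work): at normal incidence the reflectance at the design wavelength $\lambda_0$ vanishes when
\[
  n_{\mathrm{ARC}} = \sqrt{n_{\mathrm{air}}\, n_{\mathrm{sub}}}, \qquad d = \frac{\lambda_0}{4\, n_{\mathrm{ARC}}},
\]
so for silicon ($n_{\mathrm{sub}} \approx 3.9$ near 600 nm) the ideal coating index is about 2.0 and the thickness about 76 nm, which is why Si3N4 thin films are the conventional choice.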
Abstract:
Image and video compression play a major role in the world today, allowing the storage and transmission of large multimedia content volumes. However, the processing of this information requires high computational resources, hence improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia content, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes a long time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU, respectively. In this dissertation, to complement the referred work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU version into OpenCL-CPU. The proposed solutions improve the computational performance of MMP by 3× and 2.7×, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for image and video compression. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic image/video processing (or light field). Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These HEVC-based compression algorithms for holoscopic images implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but their implementation is considerably slower than HEVC. In order to achieve better execution times, we chose the OpenCL API as the GPU-enabling language to increase the module's performance. For its most costly setting, we are able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45×.
Abstract:
Biochemical reactions underlying genetic regulation are often modelled as a continuous-time, discrete-state Markov process, and the evolution of the associated probability density is described by the so-called chemical master equation (CME). However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. Recently, a finite state projection (FSP) method that truncates the state space was suggested and shown to be effective on an example model of the Pap-pili epigenetic switch. In this example, however, both the model and the final time at which the solution was computed were relatively small. Presented here is a Krylov FSP algorithm based on a combination of state-space truncation and inexact matrix-vector product routines. This allows larger-scale models to be studied and solutions for larger final times to be computed in a realistic execution time. Additionally, the new method computes the solution at intermediate times at virtually no extra cost, since it is derived from Krylov-type methods for computing matrix exponentials. For the purpose of comparison, the new algorithm is applied to the model of the Pap-pili epigenetic switch on which the original FSP was first demonstrated. The method is also applied to a more sophisticated model of regulated transcription. Numerical results indicate that the new approach is significantly faster and extendable to larger biological models.
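As a minimal illustration of the finite state projection idea (not the authors' Krylov FSP implementation), the Python sketch below truncates the CME of a simple birth-death model to a finite state space and propagates the probability vector with SciPy's Krylov-based matrix-exponential action; the probability mass retained by the truncation indicates the projection error. Rates and truncation size are assumed.

# Minimal sketch (not the paper's code): FSP truncation of a birth-death CME
# (production at rate k, degradation at rate g*n) to states {0, ..., N}, propagated
# with SciPy's action of the matrix exponential.  Rates and N are illustrative.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

k, g, N = 10.0, 1.0, 100                 # production rate, degradation rate, truncation size
states = np.arange(N + 1)                # retained copy-number states {0, ..., N}
birth = np.full(N + 1, k)                # propensity of n -> n+1
death = g * states                       # propensity of n -> n-1

# CME generator on the truncated state space; the birth outflow is kept at the
# boundary, so any probability lost from the vector measures the FSP error.
A = diags([birth[:-1], -(birth + death), death[1:]], offsets=[-1, 0, 1], format="csc")

p0 = np.zeros(N + 1)
p0[0] = 1.0                              # start with zero molecules

# Krylov-based matrix exponential action; intermediate times come at little extra cost.
times = np.linspace(0.0, 5.0, 6)
P = expm_multiply(A, p0, start=times[0], stop=times[-1], num=len(times), endpoint=True)
for t, p in zip(times, P):
    print(f"t = {t:.1f}: retained probability = {p.sum():.6f}, mean copy number = {p @ states:.2f}")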
Abstract:
Estimation of the far-field centre is carried out as part of beam auto-alignment. In this paper, the features of the far field of a square beam are presented. Based on these features, a phase-only matched filter is designed and the algorithm for centre estimation is developed. Using simulated images with different kinds of noise and 40 test images taken in sequence, the accuracy of this algorithm is estimated. Results show that the error is no more than one pixel for the simulated noisy images with 99% probability, and the stability is within one pixel for the test images. Using the improved algorithm, the computation time is reduced to 0.049 s.
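The sketch below is a generic phase-only matched filter (FFT-based correlation) for locating a known pattern's position; it is not the filter designed in the paper, and the square-beam image is synthetic.

# Minimal sketch (not the paper's filter): phase-only matched filtering to locate a
# known far-field pattern in a noisy image.  The filter keeps only the phase of the
# template spectrum; the correlation peak gives the pattern position.
import numpy as np

def phase_only_locate(image, template):
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(template, s=image.shape)
    pof = np.conj(F_tpl) / (np.abs(F_tpl) + 1e-12)      # phase-only matched filter
    corr = np.fft.ifft2(F_img * pof).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak                                          # (row, col) of the best-matching template origin

# Illustrative test: a square "far-field" spot embedded in Gaussian noise.
rng = np.random.default_rng(2)
img = rng.normal(0, 0.1, (256, 256))
img[120:136, 90:106] += 1.0                              # square beam footprint
tpl = np.ones((16, 16))                                  # known square template
row, col = phase_only_locate(img, tpl)
print("top-left corner estimate:", (row, col))           # expected near (120, 90)
print("centre estimate:", (row + 8, col + 8))            # offset by half the 16-pixel template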
Abstract:
A highly sensitive nonenzymatic amperometric glucose sensor was fabricated using Ni nanoparticles homogeneously dispersed within and on top of a vertically aligned CNT forest (CNT/Ni nanocomposite sensor), which was grown directly on a Si/SiO2 substrate. The surface morphology and elemental composition were characterized using scanning electron microscopy and energy-dispersive spectroscopy, respectively. Cyclic voltammetry and chronoamperometry were used to evaluate the catalytic activity of the CNT/Ni electrode. The CNT/Ni nanocomposite sensor exhibited a great enhancement of the anodic peak current upon adding 5 mM glucose in alkaline solution. The sensor can also be applied to the quantification of glucose content, with a linear range from 5 μM to 7 mM, a high sensitivity of 1433 μA mM-1 cm-2, and a low detection limit of 2 μM. The CNT/Ni nanocomposite sensor exhibits good reproducibility and long-term stability; moreover, it is also relatively insensitive to commonly interfering species such as uric acid, ascorbic acid, acetaminophen, sucrose and D-fructose. © 2013 Elsevier B.V.
Abstract:
A high-efficiency nanoelectrocatalyst based on high-density Au/Pt hybrid nanoparticles supported on a silica nanosphere (Au-Pt/SiO2) has been prepared by a facile wet-chemical method. Scanning electron microscopy, transmission electron microscopy, energy-dispersive X-ray spectroscopy, and X-ray photoelectron spectroscopy were employed to characterize the obtained Au-Pt/SiO2. It was found that each hybrid nanosphere is composed of high-density small Au/Pt hybrid nanoparticles with rough surfaces. These small Au/Pt hybrid nanoparticles interconnect and form a porous nanostructure, which provides highly accessible active sites, as required for high electrocatalytic activity. We suggest that this particular morphology of the Au-Pt/SiO2 may be the reason for its high catalytic activity. Thus, this hybrid nanomaterial may find potential application in fuel cells.
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we proposed previously. This algorithm converges to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, by exploiting a NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. We also compare the effectiveness of the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
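The following sketch illustrates generic MAP shrinkage of sparse coefficients under a NIG prior with additive Gaussian noise, using SciPy's norminvgauss density and a per-coefficient numerical maximisation; it is not the paper's NNSC shrinkage rule, and all parameters are assumed.

# Minimal sketch (not the paper's NNSC shrinkage): per-coefficient MAP shrinkage
# under a NIG prior and additive Gaussian noise.  For each noisy coefficient y we
# maximise log p(y | x) + log p_NIG(x) numerically.  All parameters are illustrative.
import numpy as np
from scipy.stats import norminvgauss
from scipy.optimize import minimize_scalar

def map_shrink(y, sigma, prior, search_width=5.0):
    """MAP estimate of the clean coefficient x given y = x + N(0, sigma^2)."""
    def neg_log_post(x):
        return 0.5 * ((y - x) / sigma) ** 2 - prior.logpdf(x)
    res = minimize_scalar(neg_log_post, bounds=(y - search_width, y + search_width), method="bounded")
    return res.x

rng = np.random.default_rng(3)
prior = norminvgauss(a=1.0, b=0.0, loc=0.0, scale=0.5)   # heavy-tailed, sparse-like prior
x_true = prior.rvs(size=500, random_state=rng)           # "sparse" coefficients
sigma = 0.3
y = x_true + rng.normal(0.0, sigma, size=x_true.shape)   # noisy observations

x_hat = np.array([map_shrink(v, sigma, prior) for v in y])
print("noisy MSE   :", np.mean((y - x_true) ** 2))
print("shrunken MSE:", np.mean((x_hat - x_true) ** 2))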
Abstract:
Previous research based on theoretical simulations has shown the potential of the wavelet transform to detect damage in a beam by analysing the time-deflection response due to a constant moving load. However, its application to identify damage from the response of a bridge to a vehicle raises a number of questions. Firstly, it may be difficult to record the difference in the deflection signal between a healthy and a slightly damaged structure to the required level of accuracy and at the high scanning frequencies required in the field. Secondly, the bridge will have a road profile and will be loaded by a sprung vehicle applying time-varying forces rather than a constant load. As a result, an algorithm that detects damage as a singularity in a plot of wavelet coefficients versus time appears to be very sensitive to noise. This paper addresses these questions by: (a) using the acceleration signal instead of the deflection signal, (b) employing a vehicle-bridge finite element interaction model, and (c) developing a novel wavelet-based approach using the wavelet energy content at each bridge section, which proves to be more sensitive to damage than a wavelet coefficient line plot at a given scale as employed by others.
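The sketch below illustrates the wavelet-energy idea on synthetic acceleration signals only (it does not reproduce the paper's vehicle-bridge finite element model): the total CWT energy at fine scales is computed for several bridge sections, and a section with a small damage-induced transient stands out. PyWavelets is assumed to be available.

# Minimal sketch with synthetic signals: compare the wavelet energy content of the
# acceleration response at each bridge section.  Damage at one section is mimicked
# by a short transient; its energy at fine scales makes that section stand out.
import numpy as np
import pywt

fs = 1000                                   # sampling frequency (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(4)
scales = np.arange(1, 64)                   # fine scales (high frequencies)

def section_response(damaged=False):
    acc = np.sin(2 * np.pi * 3.2 * t) + 0.02 * rng.normal(size=t.size)
    if damaged:                             # local singularity caused by damage
        acc += 0.8 * np.exp(-((t - 1.0) ** 2) / 1e-3) * np.sin(2 * np.pi * 40 * t)
    return acc

def wavelet_energy(signal):
    coefs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return float(np.sum(coefs ** 2))        # energy over the analysed scales and time

energies = np.array([wavelet_energy(section_response(damaged=(i == 6))) for i in range(10)])
print("energy per section (relative to median):", np.round(energies / np.median(energies), 2))
# The damaged section (index 6) shows a clearly higher wavelet energy than the others.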
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a mechanical scanning imaging sonar (MSIS) and the robot's dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method utilizes two extended Kalman filters (EKF). The first estimates the local path travelled by the robot while acquiring the scan, as well as its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and maintains the poses of the registered scans. The raw data from the sensors are processed and fused in-line. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
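As a minimal illustration of the dead-reckoning side of such a filter (not the paper's full augmented-state SLAM implementation), the sketch below performs one EKF prediction step for a planar pose driven by body-frame DVL/MRU displacements, including covariance propagation; the noise values are assumed.

# Minimal sketch: EKF prediction for a planar pose x = [x, y, yaw] driven by
# dead-reckoning displacements u = [dx, dy, dyaw] expressed in the body frame.
import numpy as np

def ekf_predict(x, P, u, Q):
    px, py, yaw = x
    dx, dy, dyaw = u
    c, s = np.cos(yaw), np.sin(yaw)
    # Compound the body-frame displacement into the world frame.
    x_new = np.array([px + c * dx - s * dy,
                      py + s * dx + c * dy,
                      yaw + dyaw])
    # Jacobians of the motion model w.r.t. the state and the input.
    F = np.array([[1, 0, -s * dx - c * dy],
                  [0, 1,  c * dx - s * dy],
                  [0, 0,  1]])
    G = np.array([[c, -s, 0],
                  [s,  c, 0],
                  [0,  0, 1]])
    P_new = F @ P @ F.T + G @ Q @ G.T
    return x_new, P_new

# Illustrative use: one dead-reckoning step of 1 m forward with a slight turn.
x = np.zeros(3)
P = np.diag([0.01, 0.01, 0.001])
u = np.array([1.0, 0.0, 0.05])
Q = np.diag([0.02**2, 0.02**2, 0.01**2])        # assumed displacement noise
x, P = ekf_predict(x, P, u, Q)
print("predicted pose:", np.round(x, 3))
print("pose covariance diagonal:", np.round(np.diag(P), 5))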
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
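The sketch below is a simplified stand-in for the approach described: it selects a small set of kernel centres by greedy forward selection (in place of the D-optimality criterion) and fits nonnegative mixture weights with NNLS (in place of the multiplicative nonnegative QP update), to show what a sparse kernel density estimate of this kind looks like in practice.

# Minimal sketch (not the paper's algorithm): a sparse kernel density estimate built
# by greedy forward selection of a few kernel centres and a nonnegative weight fit.
import numpy as np
from scipy.optimize import nnls

def gauss(x, c, h):
    return np.exp(-0.5 * ((x[:, None] - c[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])  # training sample
h = 0.4                                                                  # kernel width

target = gauss(x, x, h).mean(axis=1)       # full Parzen estimate at the sample points
K = gauss(x, x, h)                         # candidate kernels centred on every sample

selected = []
residual = target.copy()
for _ in range(8):                         # keep the model very sparse (8 kernels)
    scores = np.abs(K.T @ residual)        # greedy choice: largest correlation with residual
    scores[selected] = -np.inf
    selected.append(int(np.argmax(scores)))
    w, _ = nnls(K[:, selected], target)    # nonnegative weights for the chosen kernels
    residual = target - K[:, selected] @ w

w /= w.sum()                               # weights form a proper mixture (sum to 1)
centres = x[selected]
grid = np.linspace(-5, 5, 9)
sparse_pdf = gauss(grid, centres, h) @ w
print("selected centres:", np.round(np.sort(centres), 2))
print("sparse density on a coarse grid:", np.round(sparse_pdf, 3))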