946 results for finite difference time-domain analysis


Relevance: 100.00%

Abstract:

Molecular probe-based methods (fluorescent in-situ hybridisation, FISH; next-generation sequencing, NGS) have proved successful in improving both the efficiency and accuracy of the identification of microorganisms, especially those that lack distinct morphological features, such as picoplankton. However, FISH methods have the major drawback that they can identify only one or a few species at a time because of the limited number of fluorochromes that can be added to the probe. Although the sequence length that can be obtained is continually improving, NGS still requires a great deal of hands-on time, its analysis can take months, and, because it includes a PCR step, it will always be sensitive to natural enzyme inhibitors. With DNA microarrays, it is possible to identify large numbers of taxa on a single glass slide, the so-called phylochip, which can be semi-quantitative. This review details the major steps in probe design, the design and production of a phylochip, and the validation of the array. Finally, major microarray studies of the phytoplankton community are reviewed to demonstrate the scope of the method.


Relevance: 100.00%

Abstract:

A potentially powerful drive-by bridge inspection approach has been proposed to assess bridge condition using the vibrations of a test vehicle as it passes over the target bridge. This approach suffers from the effect of roadway surface roughness, and two solutions were proposed in previous studies: one subtracts the responses of two vehicles before spectral analysis (time-domain method), and the other subtracts the spectrum of one vehicle from that of the other (frequency-domain method). Although the two methods have been verified theoretically and numerically, their practical effectiveness remains an open question. Furthermore, whether the spectra processed by these methods can be used to detect potential bridge damage is also of interest. In this study, a laboratory experiment was carried out with a test tractor-trailer system and a scaled bridge. It was observed, first, that for practical applications the frequency-domain method is preferable, since it avoids the strict requirement of synchronizing the responses of the two trailers in the time domain; and second, that the statistical pattern of the processed spectra in a specific frequency band can serve as an effective anomaly indicator in drive-by inspection methods.
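The frequency-domain method above amounts to subtracting the amplitude spectra of the two trailer responses, which requires no time synchronization. A minimal sketch in Python, with synthetic signals; the names and numbers are illustrative, not the experiment's:

```python
import numpy as np

def residual_spectrum(resp_a, resp_b, fs):
    """Frequency-domain method: subtract the amplitude spectrum of one
    trailer's response from the other's, so the shared road-roughness
    contribution cancels without time-domain synchronization."""
    freqs = np.fft.rfftfreq(len(resp_a), d=1.0 / fs)
    spec_a = np.abs(np.fft.rfft(resp_a))
    spec_b = np.abs(np.fft.rfft(resp_b))
    return freqs, spec_a - spec_b

# Synthetic responses: both trailers see the same 3 Hz roughness input,
# but only one carries an extra 7 Hz bridge-related component.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
roughness = np.sin(2 * np.pi * 3.0 * t)
bridge = 0.5 * np.sin(2 * np.pi * 7.0 * t)
freqs, resid = residual_spectrum(roughness + bridge, roughness, fs)
peak_hz = freqs[np.argmax(np.abs(resid))]   # bridge-related peak survives
```

In the residual spectrum the shared roughness component cancels and the bridge-related peak remains, which is the basis of the anomaly indicator described above.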

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

A high-frequency time domain finite element scattering code using a combination of edge and piecewise constant elements on unstructured tetrahedral meshes is described. A comparison of computation with theory is given for scattering from a sphere. A parallel implementation making use of the bulk synchronous parallel (BSP) programming model is described in detail; a BSP performance model of the parallelized field calculation is derived and compared to timing measurements on up to 128 processors on a Cray T3D.
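The BSP performance model mentioned above builds on Valiant's per-superstep cost of local work plus communication plus barrier synchronization. A toy sketch with illustrative numbers (not the Cray T3D measurements):

```python
def bsp_cost(supersteps, g, l):
    """BSP cost model (Valiant): each superstep costs the maximum local
    work w, plus h * g for an h-relation communication (g is the gap, or
    inverse bandwidth), plus the barrier latency l; the total is the sum
    over supersteps."""
    return sum(w + h * g + l for (w, h) in supersteps)

# Illustrative field-update superstep (compute only) followed by a
# halo-exchange superstep (little compute, h = 50 words communicated).
total_cost = bsp_cost([(1000, 0), (200, 50)], g=4.0, l=100.0)
```

Fitting measured g and l for a machine lets such a model predict the scaling of the parallelized field calculation before running it.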

Relevance: 100.00%

Abstract:

This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is a joint work with David Veredas.
We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator on numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
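The realized measures these models ingest start from the realized variance, the sum of squared intraday log returns. A minimal sketch with illustrative prices (no noise correction or pre-averaging, which the estimators above add on top):

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: the sum of squared intraday log returns, the
    basic high-frequency realized measure that models such as FloGARCH
    and Realized GARCH take as an input."""
    log_returns = np.diff(np.log(np.asarray(prices, dtype=float)))
    return float(np.sum(log_returns ** 2))

# Illustrative intraday price path for one trading day.
rv = realized_variance([100.0, 100.4, 99.9, 100.2, 100.1])
```

In practice the sampling frequency and a noise-robust estimator (e.g. realized kernels, as in chapter 2) matter far more than this bare definition suggests.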

Relevance: 100.00%

Abstract:

This work presents a fast, high-order model capable of representing a rotor configuration with a full cage or a grid, reproducing the bar currents and accounting for space harmonics. The model uses a combined finite-element and coupled-circuit approach: the inductances are computed with finite elements, which gives the model high accuracy. For transient simulations, this method offers a significant gain in computation time over finite elements. Two simulation tools are developed, one in the time domain for dynamic solutions and another in the phasor domain, for which an application to standstill frequency response (SSFR) tests is also presented. The model construction method is described in detail, as is the procedure for modeling the rotor cage. The model is validated through the study of synchronous machines: a 5.4 kVA laboratory machine and a 109 MVA large alternator, whose experimental measurements are compared with the model's simulation results for tests such as no-load tests, three-phase and two-phase short circuits, and a load test.

Relevance: 100.00%

Abstract:

Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may be known only up to a certain number of unspecified parameters, which either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition can be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibit more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it.
Taking advantage of the parallel nature of this algorithm, we describe how the method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve the problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
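The iterative shrinkage algorithm can be illustrated with generic ISTA for an l1-regularized least-squares problem; this is a simplified stand-in (a dense random matrix instead of a shearlet frame, and all names are mine):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Generic iterative shrinkage-thresholding (ISTA) for the sparse
    recovery problem min_x 0.5*||Ax - y||^2 + lam*||x||_1: alternate a
    gradient step on the data term with soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), lam * step)
    return x

# Recover a 2-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -0.5
x_hat = ista(A, A @ x_true, lam=1e-3)
```

The same iteration, run in a shearlet-coefficient domain, separates the sparse curvilinear component from the textured background as described above; GPU acceleration exploits the fact that each step is a matrix-vector product plus an elementwise shrinkage.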

Relevance: 100.00%

Abstract:

Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains, and show that they perform better than current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms both a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated with one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization approach wherein the activation fractions and the user association are optimized in an alternating manner.
The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method. On the other hand, the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem which is NP-hard, we propose a solution scheme based on difference of submodular function optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference of convex function optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance with variation in different network topology parameters.
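The alpha-fairness utility maximized above has a standard closed form; a small sketch with illustrative rates:

```python
import numpy as np

def alpha_fair_utility(rates, alpha):
    """Alpha-fairness utility over user rates: alpha=0 recovers the plain
    sum rate, alpha=1 proportional fairness (sum of logs), and large
    alpha approaches max-min fairness (standard definition)."""
    r = np.asarray(rates, dtype=float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(np.log(r)))
    return float(np.sum(r ** (1.0 - alpha) / (1.0 - alpha)))

u_sum = alpha_fair_utility([1.0, 2.0, 4.0], alpha=0.0)  # plain sum rate
u_pf = alpha_fair_utility([1.0, 2.0, 4.0], alpha=1.0)   # proportional fairness
```

Varying alpha is what lets a single joint user-association/activation-fraction formulation trade total throughput against fairness among users.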

Relevance: 100.00%

Abstract:

The purpose of this research is to study the sedimentation mechanism, through mathematical modeling, in access channels affected by tidal currents. The most important factor in understanding the sedimentation process in any water environment is that environment's flow pattern, which is shaped by the geometry and shape of the environment as well as the processes acting in the area. The area under study in this thesis is Bushehr Gulf and its access channels (inner and outer). The study uses hydrodynamic modeling with unstructured, non-overlapping triangular grids and the finite volume method, at two scales, large (200 m to 7.5 km) and small (50 m to 7.5 km), over two time durations of 15 days and 3.5 days, to obtain the flow patterns. The 2D governing equations used in the model are the depth-averaged shallow water equations. Turbulence modeling is required to calculate the eddy viscosity coefficient, using the Smagorinsky model with a coefficient of 0.3. In addition to the flow modeling at the two scales, the data from the 3.5-day tidal current modeling are used to study sediment equilibrium in the area and the channels. The model is capable of covering the areas being settled and eroded and of identifying the effects of tidal currents on these processes. The required data for the above models, such as current and sediment data, were obtained from measurements in Bushehr Gulf and the access channels under the PSO (Port and Shipping Organization) project titled "The Sedimentation Modeling in Bushehr Port" in 1379. Hydrographic data were obtained from Admiralty charts (2003) and the Cartography Organization (1378, 1379).
The results of the modeling include cross-shore currents on the northern and northwestern coasts of Bushehr Gulf during neap tide, and the same currents on the northern and northeastern coasts of the Gulf during spring tide. These currents wash fine particles (silt, clay, and mud) from the coastal bed, which is generally made of mud and clay with some silt, and carry them away. In this regard, the role of the sediments of the islands in this area, including islands built from deposits of dredged sediment, should not be ignored. The 3.5-day modeling shows that cross-channel currents produce settlement zones in the inner and outer channels over the tidal period. In neap tide, the current enters from the upstream bend of the two channels and the outer channel, then crosses the outer channel obliquely in some places. Oblique, or even nearly perpendicular, currents from the up-slope side of the inner channel between buoys No. 15 and No. 18 interact with the channel-parallel currents and generate secondary oblique currents, which exit as down-slope currents in the channel, causing deposition of sediments as well as settling of the suspended sediments they carry. In addition, in the outer channel, the speed of the channel-parallel currents increases in the bend of the channel, which is naturally deeper; this leads to erosion and suspension of sediments in that area. The suspended sediments carried by this current, parallel to the channel axis, slow down when they pass through the shallower part of the channel, between buoys No. 7 and 8 and buoys No. 5 and 6; there the suspended sediment settles, making these places even shallower. Furthermore, oblique upstream flow leads to settlement of sediments on the up-slope side, further decreasing the depth of these locations.
On the contrary, on the down-slope side of the channel, the sediment and current modeling results indicate that the current speed increases, and the currents suspend and carry away particles from the down-slope side; the sediments then settle over a vast area downstream of both channels. At the end of neap tide, this process, along with circulations in the area, produces eddies that cause sedimentation there. During spring tide, some parts of this active sedimentation area re-enter both channels in a reverse process. The processes described above, and the locations of sedimentation and erosion in the inner and outer channels, are validated by the sediment equilibrium modeling. The model is able to estimate the suspended load, bed load, and boundary layer thickness at each point of both channels and across the modeled area.
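The Smagorinsky closure used above for the eddy viscosity coefficient (coefficient 0.3) has a simple pointwise form; a sketch for a 2D depth-averaged model, with illustrative inputs:

```python
import numpy as np

def smagorinsky_viscosity(dudx, dudy, dvdx, dvdy, dx, cs=0.3):
    """Smagorinsky eddy viscosity for a 2D depth-averaged model:
    nu_t = (cs * dx)^2 * sqrt(2 * Sij * Sij), built from the horizontal
    strain-rate tensor; cs = 0.3 matches the coefficient quoted above."""
    sxx, syy = dudx, dvdy
    sxy = 0.5 * (dudy + dvdx)
    strain = np.sqrt(2.0 * (sxx ** 2 + syy ** 2 + 2.0 * sxy ** 2))
    return (cs * dx) ** 2 * strain

# Pure shear du/dy = 1 s^-1 on the 50 m small-scale grid gives |S| = 1 s^-1.
nu_t = smagorinsky_viscosity(0.0, 1.0, 0.0, 0.0, dx=50.0)
```

Because the viscosity scales with the local grid size and strain rate, the closure automatically injects more mixing where the unstructured mesh is coarse or the flow is strongly sheared.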

Relevance: 100.00%

Abstract:

Fatigue damage in the connections of single mast arm signal support structures is a primary safety concern because collapse could result from fatigue-induced cracking. This type of cantilever signal support structure typically has very light damping, and excessively large wind-induced vibrations have been observed. Major changes related to fatigue design were made in the 2001 AASHTO LRFD Specification for Structural Supports for Highway Signs, Luminaires, and Traffic Signals, and supplemental damping devices have shown promise in reducing the vibration response and thus the fatigue load demand on mast arm signal support structures. The primary objective of this study is to investigate the effectiveness and optimal use of one type of damping device, the tuned mass damper (TMD), in vibration response mitigation. Three prototype single mast arm signal support structures, with 50-ft, 60-ft, and 70-ft arms respectively, are selected for this numerical simulation study. To validate the finite element models for the subsequent simulation study, the static deflection response of the mast arms was modeled analytically and found to be close to the numerical results from the beam-element-based finite element model. A 3-DOF dynamic model was then built, using the analytically derived stiffness matrix, for modal analysis and time history analysis. The free vibration and forced (harmonic) vibration responses of the mast arm structures from this dynamic model are in good agreement with the finite element analysis results. Furthermore, experimental results from a recent free vibration test of a full-scale 50-ft mast arm specimen in the lab are used to verify the prototype structure's fundamental frequency and viscous damping ratio.
After validating the finite element models, a series of parametric studies was conducted to examine trends and determine the optimal use of a tuned mass damper on the prototype single mast arm signal support structures by varying the following parameters: mass, frequency, viscous damping ratio, and location of the TMD. The numerical simulation results reveal that the two parameters that most influence the vibration mitigation effectiveness of a TMD on single mast arm signal pole structures are the TMD frequency and its viscous damping ratio.
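For context, the classical Den Hartog formulas give closed-form TMD tuning for an undamped primary structure; they are a textbook starting point for the frequency and damping-ratio parameters studied above, not the study's simulation-derived optima:

```python
import math

def den_hartog_tmd(mass_ratio, f_structure):
    """Classical Den Hartog TMD tuning for an undamped primary structure:
    optimal frequency ratio 1/(1+mu) and optimal TMD damping ratio
    sqrt(3*mu / (8*(1+mu)^3)), where mu is the TMD-to-structure mass
    ratio (textbook formulas, offered as an assumption/starting point)."""
    mu = mass_ratio
    f_tmd = f_structure / (1.0 + mu)
    zeta_tmd = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_tmd, zeta_tmd

# Example: a 2% mass-ratio TMD targeting a 1.0 Hz mast arm mode.
f_tmd, zeta_tmd = den_hartog_tmd(0.02, 1.0)
```

The formulas show why the parametric study finds frequency and damping ratio dominant: both optima depend only on the mass ratio and the structural frequency, and detuning either one quickly degrades the mitigation.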

Relevance: 100.00%

Abstract:

Temporally growing frontal meandering and occasional eddy shedding are observed in the Brazil Current (BC) as it flows adjacent to the Brazilian coast. No study of the dynamics of this phenomenon has been conducted to date in the region between 22 degrees S and 25 degrees S. Within this latitude range, the flow over the intermediate continental slope is marked by a current inversion at depth that is associated with the Intermediate Western Boundary Current (IWBC). A time series analysis of data from a 10-current-meter mooring was used to describe a mean vertical profile of the BC-IWBC jet and a typical meander vertical structure. The latter was obtained by an empirical orthogonal function (EOF) analysis, which showed a single mode explaining 82% of the total variance. This mode's structure decays sharply with depth, revealing that the meandering is much more vigorous within the BC domain than in the IWBC region. As the spectral analysis of the mode amplitude time series revealed no significant periods, we searched for dominant wavelengths. This search was done via a spatial EOF analysis of 51 thermal front patterns derived from digitized AVHRR images. Four modes were statistically significant at the 95% confidence level. Modes 3 and 4, which together explained 18% of the total variance, are associated with 266- and 338-km vorticity waves, respectively. With this new information derived from the data, the one-dimensional quasi-geostrophic model of [Johns, W.E., 1988. One-dimensional baroclinically unstable waves on the Gulf Stream potential vorticity gradient near Cape Hatteras. Dyn. Atmos. Oceans 11, 323-350] was applied to the interpolated mean BC-IWBC jet. The results indicate that the BC system is indeed baroclinically unstable and that the wavelengths depicted in the thermal front analysis are associated with the most unstable waves produced by the model. Growth rates were about 0.06 (0.05) day(-1) for the 266-km (338-km) wave.
Moreover, phase speeds for these waves were low compared with the surface BC velocity, which may account for remarks in the literature about growing standing or stationary meanders off southeast Brazil. The theoretical vertical structure modes associated with these waves resemble very closely the one obtained from the current-meter mooring EOF analysis. We interpret this agreement as confirmation that baroclinic instability is an important mechanism of meander growth in the BC system.
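The EOF analysis used twice above (on the mooring data and on the AVHRR thermal fronts) is, in essence, an SVD of the anomaly matrix. A generic sketch on synthetic data:

```python
import numpy as np

def eof_analysis(data):
    """EOF decomposition via SVD of the anomaly matrix (time x space):
    returns the spatial modes, the mode-amplitude time series, and the
    fraction of total variance each mode explains."""
    anomalies = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return vt, u * s, explained

# A single oscillating spatial pattern plus weak noise should put most
# of the variance in the first mode (cf. the 82% single mode above).
rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
pattern = np.sin(np.linspace(0, np.pi, 10))
data = np.outer(np.sin(t), pattern) + 0.05 * rng.standard_normal((200, 10))
modes, amplitudes, explained = eof_analysis(data)
```

The explained-variance fractions are what justify statements like "a single mode explaining 82% of the total variance", and the amplitude time series is what the spectral analysis above was performed on.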

Relevance: 100.00%

Abstract:

With the ongoing development of railguns in the United States Navy and their possible installation on its ships in the very near future, other navies will follow. It will be in the Portuguese Navy's interest to track this technological evolution, considering the advantages of adopting this type of weapon. This document covers the basic principles underlying the operation of the railgun, with the main focus on electrodynamic issues. The aim is to gain familiarity with this new type of weapon through a critical study of its operating principles. At first sight, the basic operating principle of a railgun seems quite simple: an immediate application of the Lorentz force on a current-carrying conductor. However, everything becomes more complicated when the parameters involved vary rapidly (the transient regime), which demands a deeper analysis of the behavior of the current, the electric and magnetic fields, and all the materials involved in the system. This work also involved the construction of two railguns: a first, smaller one built to gain familiarity with the system, and a final, laboratory-scale one with which several shots were fired to test different projectile materials and dimensions. In short, this document presents a time-domain analysis of the spatial distribution of the electromagnetic field, the electric current, and the resulting energy flow, complemented by experimental work.
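In the simple lumped-circuit view, the Lorentz-force operating principle discussed above reduces to F = ½ L′ I². A sketch with illustrative textbook-scale numbers (not the laboratory railgun's values):

```python
def railgun_force(inductance_gradient, current):
    """Railgun thrust from the Lorentz force in the lumped model:
    F = 0.5 * L' * I^2, where L' is the inductance gradient of the rail
    pair (H/m) and I the rail current (A). Values below are illustrative."""
    return 0.5 * inductance_gradient * current ** 2

# Typical textbook scale: L' ~ 0.5 microhenry/m, I ~ 1 MA.
force_newtons = railgun_force(0.5e-6, 1.0e6)
```

The quadratic dependence on current is why the transient behavior of the current pulse, not the steady-state value, dominates the force history in practice.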

Relevance: 100.00%

Abstract:

In this work we focus on pattern recognition methods for EMG upper-limb prosthetic control. After a detailed review of the most widely used classification methods, we propose a new classification approach. It stems from a comparison of the Fourier analysis of able-bodied and trans-radial amputee subjects. We thus suggest a different classification method that considers each surface electrode's contribution separately, together with five time-domain features, obtaining an average classification accuracy equal to 75% on a sample of trans-radial amputees. To improve the method and its robustness, we propose an automatic feature selection procedure cast as a minimization problem.
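The abstract does not name the five time-domain features; a common choice in the EMG literature (assumed here, not taken from the paper) is mean absolute value, waveform length, zero crossings, slope-sign changes, and RMS:

```python
import numpy as np

def time_domain_features(emg, threshold=0.01):
    """Five common EMG time-domain features (an assumed set): mean
    absolute value (MAV), waveform length (WL), zero crossings (ZC),
    slope-sign changes (SSC), and root mean square (RMS)."""
    d = np.diff(emg)
    mav = np.mean(np.abs(emg))
    wl = np.sum(np.abs(d))
    zc = np.sum((emg[:-1] * emg[1:] < 0) & (np.abs(d) > threshold))
    ssc = np.sum((d[:-1] * d[1:] < 0)
                 & (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > threshold))
    rms = np.sqrt(np.mean(np.square(emg)))
    return np.array([mav, wl, zc, ssc, rms])

# Sanity check on a 5 Hz sine sampled at 1 kHz for one second.
t = np.arange(0.0, 1.0, 0.001)
feats = time_domain_features(np.sin(2 * np.pi * 5 * t))
```

Computing such a feature vector per electrode, rather than pooling electrodes, is what "considering each surface electrode's contribution separately" amounts to in practice.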

Relevance: 100.00%

Abstract:

Non-Destructive Testing (NDT) of deep foundations has become an integral part of the industry's standard manufacturing processes. It is not unusual for the evaluation of concrete integrity to include the measurement of ultrasonic wave speeds, and numerous methods have been proposed that use the propagation speed of ultrasonic waves to check the integrity of concrete in drilled shaft foundations. All such methods evaluate the integrity of the concrete inside the cage and between the access tubes. The concrete outside the cage remains to be considered in order to locate the border between the concrete and the soil and thus obtain the diameter of the drilled shaft. It is also economical to obtain the diameter using the Cross-Hole Sonic Logging (CSL) system: such a methodology can be performed with the same CSL equipment directly after the standard CSL integrity tests, allowing the determination of the drilled shaft diameter without setting up another NDT device. The proposed new method is based on installing galvanized tubes outside the shaft, across from each inside tube, and performing the CSL test between the inside and outside tubes. From the experimental work, a model is developed, using signal processing, to evaluate the relationship between the thickness of the concrete and the ultrasonic wave properties. The experimental results show a direct correlation between the concrete thickness outside the cage and the maximum amplitude of the received signal obtained from the frequency-domain data. This study demonstrates how this new method for measuring the diameter of drilled shafts during construction with an NDT method overcomes the limitations of currently used methods. In the other part of the study, a new method is proposed to visualize and quantify the extent and location of defects.
It is based on a color change in the frequency-domain amplitude of the signal recorded by the receiver probe at the location of defects, and it is called Frequency Tomography Analysis (FTA). Time-domain data for the signals propagated between tubes are transferred to the frequency domain using the Fast Fourier Transform (FFT), and the distribution of the FTA is then evaluated. This method is employed after CSL has indicated a high probability of an anomaly in a given area, and it is applied to improve location accuracy and to further characterize the feature. The technique has very good resolution and clarifies the exact depth of any void or defect along the length of the drilled shaft for voids inside the cage. The last part of the study also evaluates the effect of voids inside and outside the reinforcement cage, and of corrosion in the longitudinal bars, on the strength and axial load capacity of drilled shafts. The objective is to quantify the loss in axial strength and stiffness of drilled shafts due to the presence of different types of symmetric voids and corrosion along their lengths.
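The FTA building block of transferring the received time-domain signal to the frequency domain and reading off its amplitude can be sketched with a plain FFT (synthetic tone; the sampling parameters are illustrative, not the CSL equipment's):

```python
import numpy as np

def peak_frequency_amplitude(signal, fs):
    """FFT the received time-domain signal and return the dominant
    frequency bin and its amplitude, the quantity the thickness
    correlation and FTA maps above are built from."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = int(np.argmax(spectrum[1:]) + 1)   # skip the DC bin
    return freqs[k], spectrum[k]

# A 40 kHz tone sampled at 1 MHz; for a bin-aligned sine of amplitude a
# and N samples, the FFT bin magnitude is a * N / 2.
fs = 1.0e6
t = np.arange(1000) / fs
freq, amp = peak_frequency_amplitude(0.8 * np.sin(2 * np.pi * 40e3 * t), fs)
```

Mapping this amplitude over depth and tube pairs, and color-coding it, yields the FTA image in which defects show up as local amplitude changes.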