948 results for Direct numerical simulation
Abstract:
In this Letter, we analyze the near-field diffraction pattern produced by chirped gratings. An intuitive analytical interpretation of the generated diffraction orders is proposed. Several interesting properties of the near-field diffraction pattern can be determined, such as the period of the fringes and their visibility. The diffraction orders present different widths, and some of them exhibit focusing properties. The width, location, and depth of focus of the converging diffraction orders are also determined. The analytical expressions are compared with numerical simulations and experimental results, showing close agreement.
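As a rough illustration of the kind of numerical simulation the Letter compares against, the following Python sketch propagates a binary chirped grating into the near field with the angular-spectrum method; the wavelength, base period, chirp rate and propagation distance are arbitrary illustrative values, not those of the Letter.

```python
# A minimal angular-spectrum sketch (assumed parameters, not the Letter's analytical model).
import numpy as np

wl = 0.6328e-6                 # wavelength [m]
N, L = 4096, 4e-3              # samples and aperture width [m]
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

p0, chirp = 20e-6, 5.0e6       # base period [m], linear chirp of spatial frequency [1/m^2]
phase = 2 * np.pi * (x / p0 + 0.5 * chirp * x**2)
grating = 0.5 * (1 + np.sign(np.cos(phase)))        # binary amplitude chirped grating

z = 2 * p0**2 / wl             # propagate roughly one Talbot distance of the base period
fx = np.fft.fftfreq(N, d=L / N)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wl**2 - fx**2))
U = np.fft.ifft(np.fft.fft(grating) * np.exp(1j * kz * z))   # near-field amplitude

I = np.abs(U)**2
visibility = (I.max() - I.min()) / (I.max() + I.min())
print(f"fringe visibility at z = {z * 1e3:.2f} mm: {visibility:.3f}")
```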
Abstract:
We propose and experimentally demonstrate a refractive index (RI) sensor based on cascaded microfiber knot resonators (CMKRs) with the Vernier effect. Owing to the high proportion of evanescent field in the microfiber and the spectral magnification of the Vernier effect, the RI sensor shows high sensitivity as well as high detection resolution. Using a method named "Drawing-Knotting-Assembling" (DKA), a compact CMKR device is fabricated for experimental demonstration. With the assistance of a Lorentz fitting algorithm applied to the transmission spectrum, a sensitivity of 6523 nm/RIU and a detection resolution of up to 1.533 x 10^-7 RIU are obtained in the experiment, in good agreement with the numerical simulation. The proposed all-fiber RI sensor, with its high sensitivity, compact size and low cost, can be widely used for chemical and biological detection, as well as for electric/magnetic field measurement.
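The Vernier relations behind this magnification can be illustrated with a short, hedged Python sketch in which each knot resonator is approximated by an idealised Airy-type resonance comb; the free spectral ranges and finesse below are assumed values, not those of the fabricated CMKR device.

```python
# A minimal illustration of the Vernier effect with two cascaded resonators.
import numpy as np

wl = np.linspace(1545.0, 1555.0, 500_001)   # wavelength grid [nm]
FSR1, FSR2 = 0.100, 0.105                   # slightly detuned free spectral ranges [nm]
F = 200.0                                   # coefficient of finesse (both resonators)

def comb(wl, fsr):
    """Periodic Airy-type transmission of a single idealised resonator."""
    return 1.0 / (1.0 + F * np.sin(np.pi * wl / fsr) ** 2)

T = comb(wl, FSR1) * comb(wl, FSR2)         # cascaded (multiplied) transmission

# Key Vernier relations: the cascaded envelope repeats with a much larger FSR,
# and a resonance shift of the sensing resonator is magnified by M in the envelope.
FSR_env = FSR1 * FSR2 / abs(FSR2 - FSR1)
M = FSR2 / abs(FSR2 - FSR1)
print(f"envelope FSR  ~ {FSR_env:.2f} nm (vs. {FSR1:.3f} nm for a single ring)")
print(f"magnification ~ {M:.0f}x, so the single-ring sensitivity is boosted ~{M:.0f}-fold")
```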
Abstract:
We study a small circuit of coupled nonlinear elements to investigate general features of signal transmission through networks. The small circuit itself is viewed as a building block for larger networks. Individual dynamics and coupling are motivated by neuronal systems: we consider two types of dynamical modes for an individual element, regular spiking and chattering. Each individual element can receive excitatory and/or inhibitory inputs and is subjected to different feedback types (excitatory and inhibitory; forward and recurrent). Both deterministic and stochastic simulations are carried out to study the input-output relationships of these networks. Major results for regular spiking elements include frequency locking, spike rate amplification for strong synaptic coupling, and inhibition-induced spike rate control, which can be interpreted as an output frequency rectification. For chattering elements, spike rate amplification at low frequencies and silencing at high frequencies are characteristic.
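For readers who want a concrete starting point, the toy Python sketch below couples a regular-spiking element to a chattering element with a single excitatory forward synapse, using Izhikevich model neurons as stand-ins; the element model, parameters and feedback topology are assumptions for illustration and may differ from those used in the study.

```python
# Two coupled spiking elements (regular spiking -> chattering), Euler integration.
import numpy as np

dt, T = 0.1, 1000.0                      # time step and duration [ms]
steps = int(T / dt)
# Izhikevich parameters (a, b, c, d): regular spiking (RS) and chattering (CH)
params = {"RS": (0.02, 0.2, -65.0, 8.0), "CH": (0.02, 0.2, -50.0, 2.0)}
v = np.array([-65.0, -65.0])
u = 0.2 * v
spikes = [[], []]
g_syn, tau_syn, s = 10.0, 5.0, 0.0       # forward excitatory synapse RS -> CH

rng = np.random.default_rng(0)
for k in range(steps):
    I_ext = np.array([10.0 + rng.normal(0, 2.0), 0.0])   # noisy drive on the RS element only
    I_syn = np.array([0.0, g_syn * s])                    # CH element receives synaptic input
    for i, name in enumerate(("RS", "CH")):
        a, b, c, d = params[name]
        v[i] += dt * (0.04 * v[i]**2 + 5 * v[i] + 140 - u[i] + I_ext[i] + I_syn[i])
        u[i] += dt * a * (b * v[i] - u[i])
        if v[i] >= 30.0:                  # spike: reset membrane and record spike time
            v[i], u[i] = c, u[i] + d
            spikes[i].append(k * dt)
            if i == 0:
                s += 1.0                  # presynaptic (RS) spike increments the gating variable
    s -= dt * s / tau_syn                 # exponential synaptic decay

print(f"RS rate: {len(spikes[0]) / (T / 1000):.1f} Hz, CH rate: {len(spikes[1]) / (T / 1000):.1f} Hz")
```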
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient’s medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging modality with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of values in Hounsfield units (HU), where higher HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts. The study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared, and the homogeneity index (HI) was calculated.
Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between plans with and without the GSI-based MAR algorithm. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum dose, and 1.4-1.7% for the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for the Dual-Energy method and 0.063-0.141 with the projection-based MAR algorithm were also calculated.
Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%, an error that undermines the goal of radiation therapy to deliver precise treatment. Thus, the GSI-based MAR algorithm was desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviations were evident in the maximum and minimum doses. The HI for the Dual-Energy method nearly achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than either with or without the GE MAR algorithm.
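For reference, the two quantities quoted in the Results, the percent dose error and the homogeneity index, can be reproduced with the simple arithmetic below; the dose values are placeholders and the HI definition shown is one common convention that may differ from the one used in this study.

```python
# Hedged worked example of the dosimetric metrics; all dose values are placeholders.
calculated_mean_dose = 2.05   # Gy per fraction, from the TPS on the artefact-affected CT
measured_dose        = 2.00   # Gy per fraction, e.g. an ion-chamber reading

percent_error = abs(calculated_mean_dose - measured_dose) / measured_dose * 100
print(f"percent error: {percent_error:.1f}%")          # ~2.5% in this toy case

# One common HI convention: (Dmax - Dmin) / Dprescribed; 0 would be perfectly uniform.
d_max, d_min, d_prescribed = 2.10, 1.96, 2.00          # Gy, placeholders
hi = (d_max - d_min) / d_prescribed
print(f"homogeneity index: {hi:.3f}")
```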
Abstract:
This is an investigation into the development of a numerical assessment method for the hydrodynamic performance of an oscillating water column (OWC) wave energy converter. In this research work, a systematic study has been carried out on how the hydrodynamic problem can be solved and represented reliably, focusing on the interactions between the waves and the structure and between the waves and the internal water surface. These phenomena are extensively examined numerically to show how the hydrodynamic parameters can be reliably obtained and used for OWC performance assessment. In studying the dynamic system, a two-body system is used for the OWC wave energy converter. The first body is the device itself, and the second body is an imaginary “piston,” which replaces part of the water at the internal water surface in the water column. One advantage of the two-body system for an OWC wave energy converter is its clear physical representation, and therefore the relevant mathematical expressions and the numerical simulation can be straightforward. That is, the main hydrodynamic parameters can be assessed using the boundary element method for potential flow in the frequency domain, and the relevant parameters are transformed directly from the frequency domain to the time domain for the two-body system. However, as shown in the research, an appropriate representation of the “imaginary” piston is very important, especially when the relevant parameters have to be transformed from the frequency domain to the time domain for further analysis. The examples given in the research show that correctly transforming the parameters from the frequency domain to the time domain can be a vital factor for a successful numerical simulation.
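The frequency-to-time-domain step mentioned above can be sketched as follows: the radiation impulse-response (retardation) function is obtained from the frequency-domain radiation damping by a cosine transform and then convolved with the body velocity. The damping curve and velocity history in this Python sketch are synthetic placeholders, not BEM results for an OWC.

```python
# Retardation function K(t) from a synthetic radiation damping curve B(w):
# K(t) = (2/pi) * integral of B(w) cos(w t) dw  (standard cosine-transform relation).
import numpy as np

w = np.linspace(0.01, 6.0, 600)            # wave frequencies [rad/s]
B = 80.0 * w**2 * np.exp(-1.2 * w)         # synthetic radiation damping [kg/s]

t = np.linspace(0.0, 30.0, 601)            # time axis [s]
K = (2 / np.pi) * np.trapz(B[None, :] * np.cos(np.outer(t, w)), w, axis=1)

# Radiation force in the time domain for an assumed velocity history v(t):
dt = t[1] - t[0]
v = np.cos(1.5 * t)                        # placeholder body/"piston" velocity [m/s]
F_rad = -np.convolve(K, v)[:t.size] * dt   # convolution term of the time-domain equation

print(f"K(0) = {K[0]:.1f}, radiation force at t = 30 s: {F_rad[-1]:.1f} N")
```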
Abstract:
This paper presents a study on the numerical simulation of the primary wave energy conversion in oscillating water column (OWC) wave energy converters (WECs). The newly proposed numerical approach consists of three major components: (i) a potential flow analysis for the conventional hydrodynamic parameters, such as added mass, damping coefficients, restoring force coefficients and wave excitations; (ii) a thermodynamic analysis of the air in the air chamber, under the assumptions of given power take-off characteristics and an isentropic air-flow process, in which air compressibility and its effects are included; and (iii) a time-domain analysis combining the linear potential flow and the thermodynamics of the air flow in the chamber, in which the hydrodynamics and the thermodynamics/aerodynamics are coupled through the force generated by the pressurised and de-pressurised air in the air chamber, which in turn affects the motions of the structure and the internal water surface. As an example, the newly developed approach has been applied to a fixed OWC device. Comparisons of the measured data and the simulation results show that the new method is well capable of predicting the performance of OWC devices.
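A minimal sketch of the isentropic air-chamber relation used in this kind of model is given below; the chamber geometry and displacement are illustrative assumptions, and the power take-off flow is ignored.

```python
# Isentropic air-spring relation: p * V^gamma = const, so a displacement of the
# internal water surface changes the trapped air volume and produces an excess
# pressure that couples back onto the structure and the water column.
gamma = 1.4                      # ratio of specific heats for air
p0 = 101_325.0                   # ambient pressure [Pa]
V0 = 20.0                        # equilibrium air-chamber volume [m^3] (assumed)
S = 8.0                          # internal free-surface area [m^2] (assumed)

eta = 0.3                        # upward displacement of the internal surface [m]
V = V0 - S * eta                 # compressed air volume
p = p0 * (V0 / V) ** gamma       # isentropic pressure (no power take-off flow here)
F = (p - p0) * S                 # excess-pressure force coupling the two bodies

print(f"excess pressure: {p - p0:.0f} Pa, coupling force: {F / 1e3:.1f} kN")
```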
Abstract:
Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which can strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. Therefore, QR can be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as Jennen-Steinmetz and Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
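A hedged Monte Carlo sketch of this style of power-based reasoning (written in Python with statsmodels rather than the R implementation described above) is shown below; the assumed error distribution, covariate structure and effect size are illustrative only.

```python
# Simulate data under assumed error/covariate models, fit median regression,
# and estimate the power to detect a given slope at a 5% significance level.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def empirical_power(n, beta1=0.5, tau=0.5, reps=500, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n)                     # assumed covariate structure
        err = rng.standard_t(df=3, size=n)         # assumed (heavy-tailed) error law
        y = 1.0 + beta1 * x + err
        res = QuantReg(y, sm.add_constant(x)).fit(q=tau)
        rejections += res.pvalues[1] < alpha       # test of the covariate effect
    return rejections / reps

for n in (50, 100, 200, 400):
    print(f"n = {n:4d}: empirical power ~ {empirical_power(n):.2f}")
```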
Abstract:
When we study the variables that affect survival time, we usually estimate their effects by the Cox regression model. In biomedical research, effects of the covariates are often modified by a biomarker variable. This leads to covariate-biomarker interactions. Here the biomarker is an objective measurement of the patient characteristics at baseline. Liu et al. (2015) built a local partial likelihood bootstrap model to estimate and test this interaction effect of covariates and biomarker, but the R code developed by Liu et al. (2015) can only handle one variable and one interaction term and cannot fit the model with adjustment for nuisance variables. In this project, we expand the model to allow adjustment for nuisance variables, expand the R code to take any chosen interaction terms, and set up many parameters for users to customize their research. We also build an R package called "lplb" to integrate the complex computations into a simple interface. We conduct numerical simulations to show that the new method has excellent finite sample properties under both the null and alternative hypotheses. We also apply the method to analyze data from a prostate cancer clinical trial with an acid phosphatase (AP) biomarker.
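As a simplified illustration of the kind of model being extended (not the lplb package or the local partial likelihood bootstrap itself), the Python sketch below fits a Cox model with a covariate-by-biomarker interaction and a nuisance adjustment on simulated data using lifelines; all variable names and effect sizes are hypothetical.

```python
# Cox model with a covariate-by-biomarker product term on simulated survival data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
biomarker = rng.uniform(0, 1, n)                 # e.g. a scaled baseline biomarker level
treatment = rng.integers(0, 2, n)                # covariate of interest
nuisance = rng.normal(size=n)                    # adjustment (nuisance) variable

# True hazard: the treatment effect varies with the biomarker (the interaction of interest).
lin_pred = 0.3 * nuisance + treatment * (1.0 - 2.0 * biomarker)
event_time = rng.exponential(1.0 / np.exp(lin_pred))
censor_time = rng.exponential(2.0, n)
df = pd.DataFrame({
    "T": np.minimum(event_time, censor_time),
    "E": (event_time <= censor_time).astype(int),
    "treatment": treatment,
    "biomarker": biomarker,
    "nuisance": nuisance,
})
df["treatment_x_biomarker"] = df["treatment"] * df["biomarker"]

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.summary[["coef", "p"]])                # interaction row estimates the modification effect
```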
Abstract:
An analysis of the operation of a new series-L/parallel-tuned Class-E amplifier and its equivalence to the classic shunt-C/series-tuned Class-E amplifier are presented. The first reported closed-form design equations for the series-L/parallel-tuned topology operating under ideal switching conditions are given, including the steady-state switch current and voltage, the circuit component values, the peak values of switch current and voltage, and the power-output capability. The theoretical analysis is confirmed by numerical simulation of a 500 mW (27 dBm), 10% bandwidth, 5 V Class-E power amplifier operating at 2.5 GHz, first in the series-L/parallel-tuned and then in the shunt-C/series-tuned configuration. Excellent agreement between theory and simulation results is achieved.
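For orientation, the classic idealised shunt-C/series-tuned design relations give rough component values for the quoted operating point; the sketch below uses those textbook relations, not the Letter's new series-L/parallel-tuned equations, and ignores losses.

```python
# Classic ideal shunt-C/series-tuned Class-E relations (textbook values, assumed here):
#   Pout ~ 8 / (pi^2 + 4) * Vcc^2 / R  (~0.5768 * Vcc^2 / R)
#   w * C1 * R ~ 0.1836
#   peak switch voltage ~ 3.56 * Vcc
import math

Vcc, Pout, f = 5.0, 0.5, 2.5e9               # supply [V], output power [W], frequency [Hz]
w = 2 * math.pi * f

R = 8 / (math.pi**2 + 4) * Vcc**2 / Pout     # optimum load resistance
C1 = 0.1836 / (w * R)                        # shunt (switch) capacitance

print(f"R ~ {R:.1f} ohm, C1 ~ {C1 * 1e12:.2f} pF, peak switch voltage ~ {3.56 * Vcc:.1f} V")
```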
Abstract:
The development of reliable methods for optimised energy storage and generation is one of the most pressing challenges in modern power systems. This paper presents an adaptive approach to the load leveling problem using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve this inverse problem, taking into account both the time-dependent efficiencies and the availability of generation/storage of each energy storage technology. In this analysis, a direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties, which are associated with the confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
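The flavour of such a direct (collocation-type) method can be sketched on a smooth test problem as below; the kernel and right-hand side are a textbook example, not the piecewise-continuous storage/generation kernels of the load-leveling model.

```python
# Direct mid-rectangle collocation for a Volterra equation of the first kind:
#   int_0^t K(t, s) x(s) ds = f(t)
import numpy as np

def solve_volterra_first_kind(K, f, T, n):
    """March over collocation points t_i = i*h with unknowns at midpoints s_{j-1/2}."""
    h = T / n
    t = h * np.arange(1, n + 1)
    s_mid = t - h / 2
    x = np.zeros(n)
    for i in range(n):
        # h * sum_{j<=i} K(t_i, s_j) x_j = f(t_i); all but the newest x_j are known.
        acc = h * sum(K(t[i], s_mid[j]) * x[j] for j in range(i))
        x[i] = (f(t[i]) - acc) / (h * K(t[i], s_mid[i]))
    return s_mid, x

# Test problem with known solution x(s) = cos(s):  int_0^t cos(s) ds = sin(t).
s, x = solve_volterra_first_kind(lambda t, s: 1.0, np.sin, T=2.0, n=200)
print(f"max error vs exact solution: {np.abs(x - np.cos(s)).max():.2e}")
```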
Abstract:
A conventional way to identify bridge frequencies is to utilize vibration data measured directly on the bridge. A drawback of this approach is that the deployment and maintenance of the vibration sensors are generally costly and time-consuming. One solution is a drive-by approach utilizing vehicle vibrations recorded while the vehicle passes over the bridge. In this approach, however, the vehicle vibration includes the effect of road surface roughness, which makes it difficult to extract the bridge modal properties. This study examines subtracting the signals of two trailers towed by a vehicle to reduce the effect of road surface roughness. A simplified vehicle-bridge interaction model is used in the numerical simulation; the vehicle-trailer and bridge systems are modeled as a coupled system. In addition, a laboratory experiment is carried out to verify the simulation results and examine the feasibility of damage detection by the drive-by method.
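The subtraction idea can be illustrated with the following toy Python sketch (not the paper's coupled vehicle-trailer-bridge model): both trailers traverse the same synthetic roughness profile separated by a time lag, so shifting one record by that lag and subtracting cancels the roughness while bridge-related content survives in the residual spectrum. All signals and parameters are assumed for illustration.

```python
# Toy residual-spectrum illustration of the two-trailer subtraction idea.
import numpy as np

v, d, fs, T = 10.0, 2.0, 200.0, 20.0                 # speed [m/s], spacing [m], rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
lag = int(round(d / v * fs))                         # samples between the two trailers

rng = np.random.default_rng(3)
roughness = np.cumsum(rng.normal(0, 1e-3, t.size + lag))   # synthetic road profile (dominant)
f_bridge = 3.2                                              # assumed bridge frequency [Hz]
b1 = 5.0e-3 * np.sin(2 * np.pi * f_bridge * t)              # bridge-induced part, leading trailer
b2 = 3.5e-3 * np.sin(2 * np.pi * f_bridge * t + 1.0)        # trailing trailer: other phase/amplitude

y1 = roughness[lag:] + b1          # leading trailer is d ahead, so it meets the profile earlier
y2 = roughness[:t.size] + b2       # trailing trailer sees the same profile lag samples later

# Shift the trailing record by the spatial lag and subtract: the roughness cancels exactly here.
residual = y1[:-lag] - y2[lag:]
spec = np.abs(np.fft.rfft(residual * np.hanning(residual.size)))
freqs = np.fft.rfftfreq(residual.size, 1.0 / fs)
print(f"dominant residual frequency: {freqs[spec[1:].argmax() + 1]:.2f} Hz (bridge at {f_bridge} Hz)")
```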
Abstract:
Structural Health Monitoring (SHM) is an emerging area of research concerned with improving the maintainability and safety of aerospace, civil and mechanical infrastructure by means of monitoring and damage detection. The guided-wave structural testing method is an approach for health monitoring of plate-like structures using smart-material piezoelectric transducers. Among the many kinds of transducers, those with a beam-steering feature can perform more accurate surface interrogation. A frequency-steerable acoustic transducer (FSAT) is capable of beam steering by varying the input frequency and consequently can detect and localize damage in structures. Guided-wave inspection is typically performed with phased arrays, which involve a large number of piezoelectric transducers and the associated complexity and limitations. To overcome the weight penalty, complex circuitry and maintenance concerns associated with wiring a large number of transducers, new FSATs are proposed that present inherent directional capabilities when generating and sensing elastic waves. The first generation of the Spiral FSAT has two main limitations: first, waves are excited or sensed in one direction and in the opposite one (180° ambiguity), and second, only a relatively crude approximation of the desired directivity has been attained. The second generation of the Spiral FSAT is proposed to overcome these limitations. Simulation tools become all the more important when a new idea is proposed and begins to be developed. The shaped-transducer concept, especially the second-generation Spiral FSAT, is a novel idea in guided-wave-based Structural Health Monitoring systems; hence a simulation tool is necessary for developing various design aspects of this innovative transducer. In this work, numerical simulations of the 1st and 2nd generations of the Spiral FSAT have been conducted to demonstrate the directional capability of the excited guided waves in a plate-like structure.
Abstract:
In this Letter we introduce a continuum model of neural tissue that includes the effects of so-called spike frequency adaptation (SFA). The basic model is an integral equation for synaptic activity that depends upon the non-local network connectivity, synaptic response, and firing rate of a single neuron. A phenomenological model of SFA is examined whereby the firing rate is taken to be a simple state-dependent threshold function. As in the case without SFA, classical Mexican-hat connectivity is shown to allow for the existence of spatially localized states (bumps). Importantly, an analysis of bump stability using recent Evans function techniques shows that bumps may undergo instabilities leading to the emergence of both breathers and traveling waves. Moreover, a similar analysis for traveling pulses leads to the conditions necessary to observe a stable traveling breather. Direct numerical simulations both confirm our theoretical predictions and illustrate the rich dynamic behavior of this model, including the appearance of self-replicating bumps.
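A hedged numerical sketch of such a model is given below: a 1D neural field with Mexican-hat connectivity, a Heaviside firing rate and a state-dependent (adapting) threshold standing in for SFA. The kernel and adaptation parameters are illustrative assumptions, so the regime observed (stable bump, breather, spreading activity) may differ from the Letter's results.

```python
# 1D neural field with Mexican-hat kernel and threshold accommodation (SFA stand-in).
import numpy as np

N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N

w = 2.0 * np.exp(-x**2) - 0.5 * np.exp(-x**2 / 9.0)     # Mexican-hat connectivity (assumed)
w_hat = np.fft.fft(np.fft.ifftshift(w))                  # for fast circular convolution

u = 1.5 * np.exp(-x**2)                                  # initial localized activity (seed bump)
h0, h = 0.6, 0.6 * np.ones(N)                            # baseline and dynamic firing threshold
tau_h, kappa = 20.0, 0.1                                 # adaptation time scale and strength
dt, steps = 0.05, 4000

for _ in range(steps):
    fire = (u > h).astype(float)                         # Heaviside firing rate
    syn = dx * np.real(np.fft.ifft(w_hat * np.fft.fft(fire)))   # non-local synaptic input
    u += dt * (-u + syn)
    h += dt * ((h0 - h) + kappa * fire) / tau_h          # threshold rises where activity persists

width = dx * (u > h).sum()
print(f"active region width after t = {steps * dt:.0f}: {width:.2f} (0 means the bump died)")
```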