894 results for Computational Mechanics, Numerical Analysis, Meshfree Method, Meshless Method, Time Dependent, MEMS
Abstract:
Liquid films, evaporating or non-evaporating, are ubiquitous in nature and technology. The dynamics of evaporating liquid films is relevant to several industries, such as water recovery, heat exchangers, crystal growth and drug design. The theory describing the dynamics of liquid films crosses several fields, including engineering, mathematics, materials science, biophysics and volcanology. Interfacial instabilities typically manifest as the undulation of an interface from a presumed flat state, as the onset of a secondary flow state from a primary quiescent state, or both. To study the instabilities affecting liquid films, an evaporating/non-evaporating Newtonian liquid film is subjected to a perturbation. Numerical analysis is conducted on configurations of such liquid films heated on solid surfaces in order to examine the various stabilizing and destabilizing mechanisms that can cause the formation of different convective structures. These convective structures have implications for the heat transfer that occurs via this process. Certain aspects of this research topic have not received attention, as will be evident from the literature review. Static, horizontal liquid films on solid surfaces are examined for their resistance to long-wave instabilities via linear stability analysis, the method of normal modes and finite difference methods. The spatiotemporal evolution equation available in the literature, describing the time evolution of a liquid film heated on a solid surface, is used to analyze the various stabilizing and destabilizing mechanisms affecting evaporating and non-evaporating liquid films. The impact of these mechanisms on film stability and structure for both buoyant and non-buoyant films is examined by varying the mechanical and thermal boundary conditions. Films evaporating in zero gravity are studied using the evolution equation. It is found that films that are stable to long-wave instabilities in terrestrial gravity are prone to destabilization via long-wave instabilities in zero gravity.
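Purely as an illustration of the workflow sketched above (perturb a flat film, then march an evolution equation forward in time), the following Python snippet time-steps a generic lubrication-type thin-film equation with finite differences. The specific equation, grid and parameters are assumptions made for the sketch, not the evolution equation analyzed in the thesis.

```python
# Minimal sketch (not the thesis's actual evolution equation): explicit
# finite-difference time stepping of a generic lubrication-type thin-film
# equation h_t = -d/dx(h^3 * h_xxx), with periodic boundary conditions and a
# small sinusoidal perturbation of a flat film, illustrating the
# "perturb and march in time" workflow described above.
import numpy as np

L, N = 2 * np.pi, 256          # domain length, number of grid points
dx = L / N
x = np.arange(N) * dx
h = 1.0 + 0.01 * np.sin(x)     # perturbed flat film (assumed initial state)
dt, steps = 2e-8, 50000        # small explicit step for the 4th-order term

def ddx(f):
    """Centered first derivative with periodic wrap-around."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

for _ in range(steps):
    hxx = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2
    flux = h**3 * ddx(hxx)     # mobility h^3 times third derivative
    h = h - dt * ddx(flux)     # forward-Euler update

print("min/max film thickness:", h.min(), h.max())
```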
Abstract:
If change over time is compared between several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. As the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has recently been provided, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether joint infection with HIV-1 and hepatitis C virus leads to a slower increase in CD4 lymphocyte counts over time after the start of antiretroviral therapy.
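As a rough illustration of the modelling ingredients mentioned above (a longitudinal mixed-effects model fitted to all data including baseline, with an interaction between time and a group covariate), here is a minimal Python sketch using statsmodels on simulated data; variable names and the data-generating process are illustrative, not those of the Swiss HIV Cohort Study analysis.

```python
# Hedged sketch, not the authors' implementation: fit a longitudinal
# mixed-effects model to all observations *including baseline*, with a
# time-by-group interaction, using statsmodels. Names (cd4, group, patient)
# and the simulated data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, visits = 200, 5
pid = np.repeat(np.arange(n), visits)
time = np.tile(np.arange(visits), n)              # years since start of therapy
group = np.repeat(rng.integers(0, 2, n), visits)  # 0 = HIV only, 1 = HIV/HCV
u = np.repeat(rng.normal(0, 50, n), visits)       # patient random intercept
cd4 = 350 + u + (60 - 20 * group) * time + rng.normal(0, 40, n * visits)
df = pd.DataFrame({"patient": pid, "time": time, "group": group, "cd4": cd4})

# Random intercept and slope per patient; the fixed-effect interaction
# time:group captures the group difference in the rate of CD4 increase.
model = smf.mixedlm("cd4 ~ time * group", df, groups=df["patient"],
                    re_formula="~time")
fit = model.fit()
print(fit.summary())
```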
Abstract:
In this article, we develop the a priori and a posteriori error analysis of hp-version interior penalty discontinuous Galerkin finite element methods for strongly monotone quasi-Newtonian fluid flows in a bounded Lipschitz domain Ω ⊂ ℝ^d, d = 2, 3. In the latter case, computable upper and lower bounds on the error are derived in terms of a natural energy norm, which are explicit in the local mesh size and local polynomial degree of the approximating finite element method. A series of numerical experiments illustrates the performance of the proposed a posteriori error indicators within an automatic hp-adaptive refinement algorithm.
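The a posteriori indicators are used inside an automatic adaptive loop; the sketch below illustrates only the generic estimate-mark-refine cycle (with Dörfler bulk marking) on a toy 1D interpolation problem, not the paper's hp-version DG discretization of quasi-Newtonian flow.

```python
# Schematic sketch of the estimate-mark-refine cycle that drives an adaptive
# algorithm like the hp-adaptive one described above. To stay self-contained
# it adapts a 1D piecewise-linear *interpolant* (not the paper's hp-DG
# discretization): the local indicator is the interpolation error on each
# element, and Dörfler (bulk) marking selects the elements to refine.
import numpy as np

u = lambda x: np.tanh(50 * (x - 0.5))        # target with a sharp layer
nodes = np.linspace(0.0, 1.0, 5)

for it in range(12):
    # local error indicator: midpoint interpolation error per element
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    eta = np.abs(u(mid) - 0.5 * (u(nodes[:-1]) + u(nodes[1:])))
    total = np.sqrt(np.sum(eta**2))
    print(f"iter {it:2d}: {len(nodes)-1:4d} elements, estimator {total:.3e}")
    if total < 1e-3:
        break
    # Dörfler marking: refine the smallest set of elements carrying 50% of the error
    order = np.argsort(eta)[::-1]
    cum = np.cumsum(eta[order]**2)
    marked = order[: np.searchsorted(cum, 0.5 * cum[-1]) + 1]
    nodes = np.sort(np.concatenate([nodes, mid[marked]]))
```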
Abstract:
OBJECTIVE Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL) and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS We investigated 18 patients with MS and clinically isolated syndrome (CIS), according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by multiple Wilcoxon rank-sum tests corrected for multiple comparisons. RESULTS Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters) and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while skewness and kurtosis TPMs were in general less sensitive in detecting differences between the tissues. CONCLUSION DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from the acute to the chronic state. DTPA discriminates lesions beyond features of enhancement or T2 hypersignal, on a numeric scale that allows for a more subtle grading of MS lesions.
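To make the DTPA pipeline more concrete, the following hedged sketch computes a few histogram-based statistics per time frame inside a tissue-of-interest mask of a synthetic DSCE series; it illustrates the type of computation involved, not the authors' implementation, and all array shapes and names are invented.

```python
# Hedged sketch of the kind of computation behind DTPA (not the authors'
# code): for each frame of a DSCE time series, summarize the intensity
# histogram inside a tissue-of-interest mask with average, variance, skewness
# and kurtosis, yielding one time curve per texture parameter.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
frames, ny, nx = 40, 64, 64
series = rng.gamma(shape=2.0, scale=1.0, size=(frames, ny, nx))  # fake DSCE data
toi = np.zeros((ny, nx), dtype=bool)
toi[20:40, 20:40] = True                                         # fake lesion mask

curves = {"average": [], "variance": [], "skewness": [], "kurtosis": []}
for t in range(frames):
    vox = series[t][toi]
    curves["average"].append(vox.mean())
    curves["variance"].append(vox.var())
    curves["skewness"].append(skew(vox))
    curves["kurtosis"].append(kurtosis(vox))

for name, vals in curves.items():
    print(name, np.round(vals[:5], 3))
```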
Abstract:
OBJECTIVE Caesarean section (CS) rates have risen over the past two decades. The aim of this observational study was to identify time-dependent variations in CS and vaginal delivery rates over a period of 11 years. METHOD All deliveries (13,701 deliveries during the period 1999-2009) at the University Women's Hospital Bern were analysed using an internationally standardised and approved ten-group classification system. Caesarean sections on maternal request (CSMR) were evaluated separately. RESULTS We detected an overall CS rate of 36.63% and an increase in the CS rate over time (p <0.001). The low-risk profile groups were the two largest populations and displayed low CS rates, with significantly decreasing relative size over time. The relative size of the groups with induced labour increased significantly, but this did not have an impact on the overall CS rate. Pregnancies complicated by breech position, multiple pregnancies and abnormal lies did not have an impact on the overall CS rate. The biggest contributors to a high CS rate were preterm delivery and the existence of a uterine scar from a previous CS. The CSMR rate was 1.45% and did not have an impact on the overall CS rate. CONCLUSION This observational study identified wide variations in caesarean section and vaginal delivery rates across the groups over time, and a shift towards high-risk populations was noted. The biggest contributors to high CS rates were identified, namely previous uterine scar and preterm delivery. Interventions aiming to reduce CS rates are planned.
Abstract:
The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach, which has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process in terms of a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
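For context, the sketch below performs discrete total-variation denoising by plain gradient descent on a smoothed TV energy; it illustrates the energy-minimization viewpoint only and is not the P0 discontinuous Galerkin scheme proposed in the paper.

```python
# Minimal sketch of discrete total-variation denoising by gradient descent on
# a smoothed TV energy (a stand-in for the energy-minimization viewpoint; it
# is *not* the paper's P0 discontinuous Galerkin scheme). lam and eps are
# illustrative parameters.
import numpy as np

rng = np.random.default_rng(2)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
f = clean + 0.2 * rng.normal(size=clean.shape)     # noisy image
u = f.copy()
lam, eps, tau = 0.15, 1e-3, 0.1                    # TV weight, smoothing, step

def grad(v):
    gx = np.diff(v, axis=0, append=v[-1:, :])      # forward differences,
    gy = np.diff(v, axis=1, append=v[:, -1:])      # Neumann-like boundary
    return gx, gy

for _ in range(300):
    gx, gy = grad(u)
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    # backward-difference divergence of p (approximate adjoint of the gradient)
    div = (np.diff(px, axis=0, prepend=px[:1, :])
           + np.diff(py, axis=1, prepend=py[:, :1]))
    u -= tau * ((u - f) - lam * div)               # descent on the smoothed energy

print("residual norm:", np.linalg.norm(u - f))
```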
Abstract:
We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections of masses, decay constants, pseudoscalar coupling constants and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement due to a different treatment of new extra terms generated by the breaking of the cubic invariance. We advocate treating such terms as renormalization terms of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections of masses, decay constants and pseudoscalar coupling constants are related by means of chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman-Hellmann theorem (resp. the Ward-Takahashi identity). To show the Ward-Takahashi identity we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Another method to estimate the corrections in finite volume is provided by asymptotic formulae. Asymptotic formulae were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude evaluated in infinite volume. Here, we revisit the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases, the derivation involves complications due to extra terms generated by the breaking of the cubic invariance. We isolate such terms and treat them as renormalization terms, just as before. In that way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants and scalar form factors. At the same time, we also derive asymptotic formulae for the renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants and pseudoscalar coupling constants are related by means of chiral Ward identities. A similar relation independently connects the asymptotic formulae for the renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
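The abstract describes the asymptotic formulae only in words; purely schematically (with the overall normalization, prefactors and the quantity-specific amplitude left unspecified), a Lüscher-type formula relating a finite-volume correction to an infinite-volume amplitude has the structure

```latex
% Schematic form only: normalization and prefactors are omitted, and the
% amplitude A_X depends on the physical quantity X (mass, decay constant, ...).
\[
  \delta X(L) \;\equiv\; X(L) - X(\infty)
  \;\propto\; \frac{1}{M_\pi L}
  \int_{-\infty}^{\infty} \mathrm{d}y \;
  e^{-\sqrt{M_\pi^2 + y^2}\, L}\, \mathcal{A}_X(iy)
  \;+\; \mathcal{O}\!\left(e^{-\bar{M} L}\right),
  \qquad \bar{M} > M_\pi .
\]
```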
Abstract:
Pathway-based genome-wide association studies evolved from pathway analysis of microarray gene expression data and are under rapid development as a complement to single-SNP-based genome-wide association studies. However, they face new challenges, such as the summarization of SNP statistics into pathway statistics. The current study applies ridge-regularized Kernel Sliced Inverse Regression (KSIR) to achieve dimension reduction and compares this method to two other widely used methods: the minimal-p-value (minP) approach, which assigns the best test statistic of all SNPs in each pathway as the statistic of the pathway, and the principal component analysis (PCA) method, which uses PCA to calculate the principal components of each pathway. Comparison of the three methods using simulated datasets consisting of 500 cases, 500 controls and 100 SNPs demonstrated that the KSIR method outperformed the other two methods in terms of causal pathway ranking and statistical power. The PCA method showed similar performance to the minP method. The KSIR method also outperformed the other two methods in analyzing a real dataset, the WTCCC Ulcerative Colitis dataset, consisting of 1762 cases and 3773 controls as the discovery cohort and 591 cases and 1639 controls as the replication cohort. Several immune and non-immune pathways relevant to ulcerative colitis were identified by these methods. Results from the current study provide a reference for further methodology development and identify novel pathways that may be of importance to the development of ulcerative colitis.
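As a point of reference for the comparison described above, here is a minimal sketch of the PCA-based pathway statistic (one of the two comparison methods, not the authors' ridge-regularized KSIR) on simulated case-control data; the pathway definitions and effect sizes are invented for illustration.

```python
# Minimal sketch of the PCA-based comparison method described above (not the
# authors' ridge-regularized kernel SIR): for each pathway, take the leading
# principal component of its SNP genotype submatrix and test its association
# with case/control status. Data are simulated for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_cases, n_controls, n_snps = 500, 500, 100
y = np.r_[np.ones(n_cases), np.zeros(n_controls)]
X = rng.binomial(2, 0.3, size=(n_cases + n_controls, n_snps)).astype(float)
X[y == 1, :10] += rng.binomial(1, 0.15, size=(n_cases, 10))  # weak causal signal
pathways = {"causal": np.arange(0, 10), "null": np.arange(50, 60)}  # SNP indices

for name, idx in pathways.items():
    Xs = X[:, idx] - X[:, idx].mean(axis=0)       # center pathway genotypes
    _, _, vt = np.linalg.svd(Xs, full_matrices=False)
    pc1 = Xs @ vt[0]                              # leading principal component
    t, p = ttest_ind(pc1[y == 1], pc1[y == 0])
    print(f"pathway {name:>6}: t = {t:+.2f}, p = {p:.3g}")
```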
Abstract:
With most clinical trials, missing data presents a statistical problem in evaluating a treatment's efficacy. There are many methods commonly used to handle missing data; however, these methods leave room for bias to enter the study. This thesis was a secondary analysis of data taken from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and effect of the administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI). We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only subjects with complete data. Through the general linear model, an unbiased solution was derived for the VIF of the treatment's efficacy using the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution was found for the VIF, it was migrated to Excel 2010 to evaluate data from TIME. The resulting numerical values from the two groups were compared to assess the effect of missing data. The VIF values from the TIME study were considerably lower in the group with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, the number of subjects with missing data was also varied to see how missing data affect the VIF. When the number of subjects with only baseline data was increased, we saw a significantly faster increase in VIF values in the group with only complete data, while the group with missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the group with follow-up-only data. This essentially showed that the VIFs increase steadily when missing data are not ignored. When missing data are ignored, as in our comparison group, the VIF values increase sharply as the correlation increases.
Abstract:
In this work we propose a method to accelerate time-dependent numerical solvers of systems of PDEs that require a high cost in computational time and memory. The method is based on the combined use of such a numerical solver with a proper orthogonal decomposition (POD), from which we identify modes, a Galerkin projection (which provides a reduced system of equations) and the integration of the reduced system, studying the evolution of the modal amplitudes. We integrate the reduced model until our a priori error estimator indicates that the approximation is no longer accurate. At this point we again use the original numerical code over a short time interval to adapt the POD manifold, and then continue with the integration of the reduced model. The approach is applied to two model problems: the Ginzburg-Landau equation in transient chaos conditions and the two-dimensional pulsating cavity problem, which describes the motion of liquid in a box whose upper wall moves back and forth in a quasi-periodic fashion. Finally, we discuss a way of improving the performance of the method using experimental data or information from numerical simulations.
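A minimal sketch of the POD plus Galerkin projection building blocks is given below, under strong simplifying assumptions: the full-order model is a linear 1D heat equation, and the adaptive switching between the numerical code and the reduced model driven by the error estimator is omitted.

```python
# Minimal POD-Galerkin sketch under simplifying assumptions: the "full" model
# is a linear 1D heat equation integrated with explicit Euler, the snapshots
# feed an SVD-based POD, and the Galerkin-projected system is integrated in
# the reduced coordinates. The adaptive full/reduced switching described in
# the abstract is not reproduced here.
import numpy as np

N, dt = 200, 1e-6
x = np.linspace(0.0, 1.0, N + 2)[1:-1]               # interior nodes
dx = x[1] - x[0]
A = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2          # Dirichlet Laplacian
u = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)  # initial condition

# 1) full solver: collect snapshots
snaps = [u.copy()]
for _ in range(2000):
    u = u + dt * (A @ u)
    snaps.append(u.copy())
S = np.array(snaps).T                                # columns are snapshots

# 2) POD: leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :4]                                       # keep 4 modes

# 3) Galerkin projection and reduced time integration
Ar = Phi.T @ A @ Phi
a = Phi.T @ S[:, 0]                                  # reduced initial state
for _ in range(2000):
    a = a + dt * (Ar @ a)

print("ROM vs full-order error:", np.linalg.norm(Phi @ a - S[:, -1]))
```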
Abstract:
A local proper orthogonal decomposition (POD) plus Galerkin projection method was recently developed to accelerate time dependent numerical solvers of PDEs. This method is based on the combined use of a numerical code (NC) and a Galerkin system (GS) in a sequence of interspersed time intervals, INC and IGS, respectively. POD is performed on some sets of snapshots calculated by the numerical solver in the INC intervals. The governing equations are Galerkin projected onto the most energetic POD modes and the resulting GS is time integrated in the next IGS interval. The major computational effort is associated with the snapshots calculation in the first INC interval, where the POD manifold needs to be completely constructed (it is only updated in subsequent INC intervals, which can thus be quite small). As the POD manifold depends only weakly on the particular values of the parameters of the problem, a suitable library can be constructed adapting the snapshots calculated in other runs to drastically reduce the size of the first INC interval and thus the involved computational cost. The strategy is successfully tested in (i) the one-dimensional complex Ginzburg-Landau equation, including the case in which it exhibits transient chaos, and (ii) the two-dimensional unsteady lid-driven cavity problem.
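The library idea can be illustrated, again only schematically, by combining previously computed POD modes with a few new snapshots and re-orthogonalizing; the snippet below uses random placeholder data and is not the adaptation algorithm of the paper.

```python
# Hedged sketch of the library idea (not the paper's algorithm): an existing
# POD basis from previous runs is combined with a few fresh snapshots of the
# current run and re-orthogonalized by SVD, so the first full-order interval
# can be kept short. Matrices here are random placeholders.
import numpy as np

rng = np.random.default_rng(4)
n = 500
Phi_lib = np.linalg.qr(rng.normal(size=(n, 8)))[0]   # library POD modes
S_new = rng.normal(size=(n, 5))                      # a few new snapshots

# weight library modes and new snapshots equally, then extract an updated basis
combined = np.hstack([Phi_lib, S_new / np.linalg.norm(S_new, axis=0)])
U, s, _ = np.linalg.svd(combined, full_matrices=False)
Phi_updated = U[:, :8]                               # adapted POD manifold

print("orthonormal check:", np.allclose(Phi_updated.T @ Phi_updated, np.eye(8)))
```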
Abstract:
There are many situations where input feature vectors are incomplete, and methods to tackle this problem have been studied for a long time. A commonly used procedure is to replace each missing value with an imputation. This paper presents a method to perform categorical missing data imputation from numerical and categorical variables. The imputations are based on Simpson's fuzzy min-max neural networks, in which the input variables for learning and classification are only numerical. The proposed method extends the input to categorical variables by introducing new fuzzy sets, a new operation and a new architecture. The procedure is tested and compared with other approaches using opinion poll data.
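As background for the extension described above, the snippet below sketches a simplified hyperbox membership function in the spirit of fuzzy min-max networks for purely numerical inputs; it is not Simpson's exact membership function and does not include the categorical fuzzy sets, operation or architecture proposed in the paper.

```python
# Simplified sketch of the hyperbox membership idea behind fuzzy min-max
# networks (not Simpson's exact membership function, and without the
# categorical extension proposed in the paper): membership decreases linearly
# with the distance by which each numerical feature falls outside the
# hyperbox [v, w], controlled by a sensitivity parameter gamma.
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """x: feature vector in [0,1]^n, v/w: hyperbox min/max points."""
    below = np.maximum(0.0, v - x)        # how far x falls below the box
    above = np.maximum(0.0, x - w)        # how far x falls above the box
    per_dim = 1.0 - np.minimum(1.0, gamma * (below + above))
    return float(np.mean(np.maximum(0.0, per_dim)))

v = np.array([0.2, 0.3]); w = np.array([0.5, 0.6])        # a learned hyperbox
print(hyperbox_membership(np.array([0.4, 0.5]), v, w))    # inside  -> 1.0
print(hyperbox_membership(np.array([0.7, 0.1]), v, w))    # outside -> < 1.0
```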
Abstract:
When nonlinear physical systems of infinite extent, such as tunnels and perforations, are modelled, it is necessary to simulate suitably both the solution at infinity and the nonlinearity. The finite element method (FEM) is a well-known procedure for simulating nonlinear behaviour. However, the treatment of the infinite field with domain truncations is often questionable. On the other hand, the boundary element method (BEM) is suitable for simulating the infinite behaviour without truncations. By combining both methods, the advantages of each one can therefore be suitably exploited. Several possibilities for FEM-BEM coupling and their performance in some practical cases are discussed in this paper. Parallelizable coupling algorithms based on domain decomposition are developed and compared with the most traditional coupling methods.
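The coupling can be illustrated at a purely algebraic level: two subdomain operators (an interior "FEM-like" block and an exterior "BEM-like" block) share interface unknowns, and eliminating the subdomain unknowns yields an interface Schur-complement system in which each subdomain solve is independent, hence parallelizable. The sketch below uses random placeholder matrices and is not tied to the specific formulations of the paper.

```python
# Hedged, purely algebraic sketch of the coupling idea (not the paper's
# FEM/BEM formulations): two subdomain operators are coupled through shared
# interface unknowns, and the interface system is obtained by Schur
# complement, so each subdomain solve can be carried out independently.
import numpy as np

rng = np.random.default_rng(5)
def spd(n):                                   # random SPD placeholder matrix
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

ni_f, ni_b, ng = 30, 25, 10                   # interior FEM, interior BEM, interface
Aff, Abb, Agg = spd(ni_f), spd(ni_b), spd(ng)
Afg = rng.normal(size=(ni_f, ng)); Abg = rng.normal(size=(ni_b, ng))
bf, bb, bg = rng.normal(size=ni_f), rng.normal(size=ni_b), rng.normal(size=ng)

# Schur complement on the interface: each term needs only its own subdomain solve
S = Agg - Afg.T @ np.linalg.solve(Aff, Afg) - Abg.T @ np.linalg.solve(Abb, Abg)
g = bg - Afg.T @ np.linalg.solve(Aff, bf) - Abg.T @ np.linalg.solve(Abb, bb)
ug = np.linalg.solve(S, g)                    # interface unknowns
uf = np.linalg.solve(Aff, bf - Afg @ ug)      # recover subdomain solutions
ub = np.linalg.solve(Abb, bb - Abg @ ug)

# check against the monolithic block system
K = np.block([[Aff, np.zeros((ni_f, ni_b)), Afg],
              [np.zeros((ni_b, ni_f)), Abb, Abg],
              [Afg.T, Abg.T, Agg]])
u = np.linalg.solve(K, np.concatenate([bf, bb, bg]))
print("max difference:", np.abs(np.concatenate([uf, ub, ug]) - u).max())
```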
Abstract:
Several methods to improve multiple distant microphone (MDM) speaker diarization based on Time Delay of Arrival (TDOA) features are evaluated in this paper. All of them avoid the use of a single reference channel to calculate the TDOA values and, based on different criteria, select among all possible pairs of microphones a set of pairs that will be used to estimate the TDOAs. The evaluated methods have been named "Dynamic Margin" (DM), "Extreme Regions" (ER), "Most Common" (MC), "Cross Correlation" (XCorr) and "Principal Component Analysis" (PCA). It is shown that all methods improve on the baseline results for the development set, and four of them also improve the results for the evaluation set. Relative DER improvements of 3.49% and 10.77% are obtained for DM and ER, respectively, for the test set. The XCorr and PCA methods achieve relative DER improvements of 36.72% and 30.82% for the test set. Moreover, the computational cost of the XCorr method is 20% lower than that of the baseline.
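Although the paper focuses on how microphone pairs are selected, the underlying TDOA values are typically obtained with a generalized cross-correlation; the snippet below sketches GCC-PHAT for a single synthetic microphone pair as background, and does not reproduce the evaluated selection methods themselves.

```python
# Hedged sketch of TDOA estimation for one microphone pair via GCC-PHAT (a
# standard technique in this setting; the pair-selection strategies compared
# in the paper are not reproduced here). The signals are synthetic: channel b
# is a delayed, noisy copy of channel a.
import numpy as np

rng = np.random.default_rng(6)
fs, true_delay = 16000, 23                    # sample rate, delay in samples
a = rng.normal(size=8192)
b = np.roll(a, true_delay) + 0.1 * rng.normal(size=a.size)

def gcc_phat(x, y, max_lag=200):
    n = 2 * x.size
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.maximum(np.abs(R), 1e-12)         # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    cc = np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])  # lags -max..+max
    return np.argmax(np.abs(cc)) - max_lag

print("estimated TDOA (samples):", gcc_phat(b, a))   # expect ~ +23
```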
Abstract:
The main objective of this project is to provide the telecommunications engineer with an overview of the techniques used in modelling the auditory system. The auditory system is modelled with the following objectives: a) to interpret direct measurements, b) to unify the understanding of different phenomena, c) to guide amplification strategies that compensate for hearing loss, and d) to obtain experimentally testable predictions of behaviour at different levels of complexity. This work briefly reviews and explains the different techniques used to model the parts of the auditory system, from electroacoustic analogies, biophysical models and binaural models to the implementation of auditory filters by means of signal processing. We conclude that modelling through electroacoustic analogies allows a quick implementation and is easy to understand, but has certain limitations. Simulations based on numerical analysis are accurate and very useful for both the middle and the inner ear. Signal processing is the most complete and widely used approach, since it allows the outer and middle ear to be modelled and, in addition, enables the implementation of very precise and realistic cochlear filters that can be embedded in perceptual models.
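As one concrete example of the signal-processing approach mentioned above, the sketch below samples the impulse response of a gammatone auditory filter (using the commonly cited Glasberg and Moore ERB parameterization) and applies it as an FIR filter to noise; it is an illustrative building block, not the project's specific models.

```python
# Hedged sketch of one common signal-processing building block mentioned
# above: a gammatone auditory filter, implemented here simply by sampling its
# impulse response and applying it as an FIR filter (order, bandwidth factor
# and ERB formula follow the commonly used Glasberg & Moore parameterization).
import numpy as np

fs, fc = 16000, 1000.0                       # sample rate, centre frequency (Hz)
erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)      # equivalent rectangular bandwidth
b, order = 1.019 * erb, 4
t = np.arange(int(0.05 * fs)) / fs           # 50 ms impulse response
g = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
g /= np.sqrt(np.sum(g**2))                   # crude energy normalization

x = np.random.default_rng(7).normal(size=fs)  # 1 s of white noise
y = np.convolve(x, g, mode="same")            # noise band-limited around fc
print("output RMS:", np.sqrt(np.mean(y**2)))
```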