983 results for n-dimensional Maclaurin series
Abstract:
In this paper we introduce a type of Hypercomplex Fourier Series based on quaternions, and discuss a Hypercomplex version of the Square of the Error Theorem. Since their discovery by Hamilton (Sinegre [1]), quaternions have provided beautiful insights both into the structure of different areas of Mathematics and into the connections of Mathematics with other fields. For instance: I) the Pauli spin matrices used in Physics can be easily explained through quaternion analysis (Lan [2]); II) the Fundamental Theorem of Algebra (Eilenberg [3]), which asserts that a polynomial of degree n in quaternions maps into itself the four-dimensional sphere of all real quaternions, with the point at infinity added, and the degree of this map is n. Motivated by earlier works by two of us on Power Series (Pendeza et al. [4]), and by a recent paper on Liouville's Theorem (Borges and Marão [5]), we obtain a Hypercomplex version of the Fourier Series, which we hope can be used for the treatment of hypergeometric partial differential equations such as the damped harmonic oscillation.
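As a loose illustration of the algebra behind such a hypercomplex Fourier kernel (a sketch under our own assumptions, not the authors' construction; the axis `mu` and all function names are hypothetical), a unit pure quaternion μ satisfies μ² = −1, so exp(μθ) = cos θ + μ sin θ behaves like e^{iθ}, and exponents along a fixed axis add:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qexp_axis(mu, theta):
    """exp(mu*theta) for a unit pure quaternion mu: cos(theta) + mu*sin(theta)."""
    return np.concatenate([[np.cos(theta)], np.sin(theta) * mu])

mu = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # assumed unit pure quaternion axis

# Kernel property: exponents along a fixed axis compose additively, like e^{i t}
a = qexp_axis(mu, 0.7)
b = qexp_axis(mu, 0.5)
print(np.allclose(qmul(a, b), qexp_axis(mu, 1.2)))  # → True
```

With such a kernel, hypercomplex coefficients analogous to c_n = (1/2π)∫ f(t) exp(−μnt) dt can be formed, mirroring the complex Fourier case.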
Abstract:
This paper aims to determine the velocity profile, in the transient state, for a parallel incompressible flow known as Couette flow. The Navier-Stokes equations were applied to this flow. Analytical solutions, based on Fourier series and integral transforms, were obtained for the one-dimensional transient Couette flow, taking into account constant and time-dependent pressure gradients acting on the fluid from the same instant at which the plate starts its movement. Taking advantage of orthogonality and superposition properties, solutions were found for both considered cases. For a time-dependent pressure gradient, a general solution for the Couette flow was found for a particular time function. It was found that the solution for a time-dependent pressure gradient includes the solutions for a zero pressure gradient and for a constant pressure gradient.
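The zero-pressure-gradient case can be sketched numerically; the series below is the standard textbook Fourier solution for an impulsively started lower plate, not necessarily the exact form derived in the paper, and all parameter values are assumed:

```python
import numpy as np

U, h, nu = 1.0, 1.0, 0.1   # plate speed, gap width, kinematic viscosity (assumed values)

def couette_velocity(y, t, n_terms=200):
    """Transient Couette profile, zero pressure gradient, lower plate started at speed U:
    u(y,t) = U(1 - y/h) - (2U/pi) * sum_n (1/n) sin(n pi y / h) exp(-n^2 pi^2 nu t / h^2)."""
    n = np.arange(1, n_terms + 1)
    series = np.sum(np.sin(n * np.pi * y / h) / n
                    * np.exp(-(n * np.pi / h) ** 2 * nu * t))
    return U * (1 - y / h) - 2 * U / np.pi * series

print(round(couette_velocity(0.0, 0.01), 6))  # → 1.0 (no-slip at the moving plate)
```

The boundary conditions u(0, t) = U and u(h, t) = 0 hold term by term, and for large t the profile relaxes to the linear steady state U(1 − y/h).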
Abstract:
Aldolase has emerged as a promising molecular target for the treatment of human African trypanosomiasis. In recent years, owing to the increasing number of patients infected with Trypanosoma brucei, there is an urgent need for new drugs to treat this neglected disease. In the present study, two-dimensional fragment-based quantitative structure-activity relationship (QSAR) models were generated for a series of inhibitors of aldolase. Through the application of leave-one-out and leave-many-out cross-validation procedures, significant correlation coefficients were obtained (r(2) = 0.98 and q(2) = 0.77) as an indication of the internal and external statistical consistency of the models. The best model was employed to predict pK(i) values for a series of test set compounds, and the predicted values were in good agreement with the experimental results, showing the predictive power of the model for untested compounds. Moreover, structure-based molecular modeling studies were performed to investigate the binding mode of the inhibitors in the active site of the parasitic target enzyme. The structural and QSAR results provided useful molecular information for the design of new aldolase inhibitors within this structural class.
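A minimal sketch of the leave-one-out cross-validated statistic q² mentioned above, for a plain linear model on synthetic descriptor data (the data and model here are illustrative assumptions, not the fragment-based descriptors of the study):

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_total for a linear model."""
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])   # add intercept column
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i            # fit on all compounds except i
        beta, *_ = np.linalg.lstsq(Xd[keep], y[keep], rcond=None)
        press += (y[i] - Xd[i] @ beta) ** 2  # squared prediction error for the held-out compound
    return 1 - press / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 3))            # 25 compounds, 3 descriptors (assumed)
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + 0.1 * rng.standard_normal(25)  # pKi-like response
q2 = loo_q2(X, y)
```

Leave-many-out proceeds identically, holding out groups of compounds instead of one at a time.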
Abstract:
Human African trypanosomiasis, also known as sleeping sickness, is a major cause of death in Africa, and no safe and effective treatments are available for it. The enzyme aldolase from Trypanosoma brucei is an attractive, validated target for drug development. A series of alkyl-glycolamido and alkyl-monoglycolate derivatives was studied employing a combination of drug design approaches. Three-dimensional quantitative structure-activity relationship (3D QSAR) models were generated using comparative molecular field analysis (CoMFA). Significant results were obtained for the best QSAR model (r2 = 0.95, non-cross-validated correlation coefficient, and q2 = 0.80, cross-validated correlation coefficient), indicating its predictive ability for untested compounds. The model was then used to predict values of the dependent variable (pKi) for an external test set, and the predicted values were in good agreement with the experimental results. The integration of 3D QSAR, molecular docking and molecular dynamics simulations provided further insight into the structural basis for selective inhibition of the target enzyme.
Abstract:
A series of oligo-phenylene dendronised conjugated polymers was prepared. The divergent synthetic approach adopted allowed for the facile synthesis of a range of dendronised monomers from a common intermediate, e.g. first and second generation fluorene. Only the polymerisation of the first generation and alkylarylamine-substituted dendronised fluorene monomers yielded high molecular weight materials, attributed to the low solubility of the remaining dendronised monomers. The alkylarylamine-substituted dendronised poly(fluorene) was incorporated into an organic light emitting diode (OLED) and exhibited increased colour stability in air compared to other poly(fluorenes). The concept of dendronisation was extended to poly(fluorenone), a previously insoluble material. The synthesis of the first soluble poly(fluorenone) was achieved by the incorporation of oligo-phenylene dendrons at the 4-position of fluorenone. The dendronisation of fluorenone allowed a polymer with an Mn of 4.1 × 10⁴ g mol⁻¹ to be prepared. Cyclic voltammetry of the dendronised poly(fluorenone) showed that the electron affinity of the polymer was high and that the polymer is a promising n-type material. A dimer and trimer of indenofluorene (IF) were prepared from the monobromo IF. These oligomers were investigated by two-dimensional wide-angle X-ray scattering (2D-WAXS), polarised optical microscopy (POM) and dielectric spectroscopy, and found to form highly ordered smectic phases. By attaching a perylene dye as the end-capper on the IF oligomers, molecules that exhibited efficient Förster energy transfer were obtained. Indenofluorene monoketone, a potential defect structure for IF-based OLEDs, was synthesised. The synthesis of this model defect structure allowed the long-wavelength emission in OLEDs to be identified as arising from ketone defects. The long-wavelength emission from the indenofluorene monoketone was found to be concentration dependent, suggesting that aggregate formation is occurring.
An IF linked hexa-peri-hexabenzocoronene (HBC) dimer was synthesised. The 2D-WAXS images of this HBC dimer demonstrate that the molecule exhibits intercolumnar organisation perpendicular to the extrusion direction. POM images of mixtures of the HBC dimer mixed with an HBC with a low isotropic temperature demonstrated that the HBC dimer is mixing with the isotropic HBC.
A Phase Space Box-counting based Method for Arrhythmia Prediction from Electrocardiogram Time Series
Abstract:
Arrhythmia is a class of cardiovascular disease that accounts for a large number of deaths and poses potentially irremediable danger. Arrhythmia is a life-threatening condition originating from disorganized propagation of electrical signals in the heart, resulting in desynchronization among its different chambers. Fundamentally, the synchronization process means that the phase relationship of electrical activities between the chambers remains coherent, maintaining a constant phase difference over time. If desynchronization occurs due to arrhythmia, the coherent phase relationship breaks down, resulting in a chaotic rhythm that affects the regular pumping mechanism of the heart. This phenomenon was explored using the phase space reconstruction technique, a standard analysis technique for time series data generated by nonlinear dynamical systems. In this project a novel index is presented for predicting the onset of ventricular arrhythmias. Analysis of continuously captured long-term ECG recordings was conducted up to the onset of arrhythmia by the phase space reconstruction method, obtaining 2-dimensional images that were analysed by the box-counting method. The method was tested using ECG data of three different kinds, normal (NR), Ventricular Tachycardia (VT) and Ventricular Fibrillation (VF), extracted from the Physionet ECG database. Statistical measures such as the mean (μ), standard deviation (σ) and coefficient of variation (σ/μ) of the box counts in the phase space diagrams are derived for a sliding window of 10 beats of the ECG signal. From the results of these statistical analyses, a threshold was derived as an upper bound on the Coefficient of Variation (CV) of the box counts of ECG phase portraits, which is capable of reliably predicting the impending arrhythmia long before its actual occurrence.
As future work, it is planned to validate this prediction tool over a wider population of patients affected by different kinds of arrhythmia, such as atrial fibrillation and bundle branch block, and to set different thresholds for them, in order to confirm its clinical applicability.
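A rough sketch of the delay-embedding and box-counting steps described above, on synthetic signals standing in for ECG beats (the grid resolution, delay, window sizes and test signals are assumptions, not the study's settings):

```python
import numpy as np

def delay_embed(x, tau):
    """2-D phase space reconstruction: pairs (x(t), x(t + tau))."""
    return np.column_stack([x[:-tau], x[tau:]])

def box_count(points, n_boxes=32):
    """Count occupied cells when the phase portrait is overlaid with an n_boxes^2 grid."""
    lo, hi = points.min(0), points.max(0)
    norm = (points - lo) / np.where(hi - lo > 0, hi - lo, 1.0)  # scale to unit square
    idx = np.minimum((norm * n_boxes).astype(int), n_boxes - 1)
    return len({(i, j) for i, j in idx})

# synthetic "regular" vs "irregular" signals as illustrative stand-ins for ECG rhythms
t = np.linspace(0, 20 * np.pi, 4000)
regular = np.sin(t)
irregular = (np.sin(t) + 0.8 * np.sin(np.sqrt(2) * t)
             + 0.3 * np.random.default_rng(0).standard_normal(t.size))

counts = [box_count(delay_embed(s, tau=25)) for s in (regular, irregular)]

# the paper's index: coefficient of variation of box counts over sliding windows
wins = [box_count(delay_embed(irregular[i:i + 500], 25)) for i in range(0, 3500, 500)]
cv = np.std(wins) / np.mean(wins)
```

A disorganized rhythm fills more of the reconstructed phase plane than a regular one, so its box count is larger; tracking the CV of this count over windows gives the kind of threshold index the abstract describes.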
Abstract:
This thesis reports on the realization, characterization and analysis of ultracold bosonic and fermionic atoms in three-dimensional optical lattice potentials. Ultracold quantum gases in optical lattices can be regarded as ideal model systems to investigate quantum many-body physics. In this work interacting ensembles of bosonic 87Rb and fermionic 40K atoms are employed to study equilibrium phases and nonequilibrium dynamics. The investigations are enabled by a versatile experimental setup, whose core feature is a blue-detuned optical lattice that is combined with Feshbach resonances and a red-detuned dipole trap to allow for independent control of tunneling, interactions and external confinement. The Fermi-Hubbard model, which plays a central role in the theoretical description of strongly correlated electrons, is experimentally realized by loading interacting fermionic spin mixtures into the optical lattice. Using phase-contrast imaging, the in-situ size of the atomic density distribution is measured, which allows the extraction of the global compressibility of the many-body state as a function of interaction and external confinement. Thereby, metallic and insulating phases are clearly identified. At strongly repulsive interaction, a vanishing compressibility and suppression of doubly occupied lattice sites signal the emergence of a fermionic Mott insulator. In a second series of experiments, interaction effects in bosonic lattice quantum gases are analyzed. Typically, interactions between microscopic particles are described as two-body interactions. As such they are also contained in the single-band Bose-Hubbard model. However, our measurements demonstrate the presence of multi-body interactions that effectively emerge via virtual transitions of atoms to higher lattice bands.
These findings are enabled by the development of a novel atom-optical measurement technique: in quantum phase revival spectroscopy, periodic collapse and revival dynamics of the bosonic matter-wave field are induced. The frequencies of the dynamics are directly related to the on-site interaction energies of atomic Fock states and can be read out with high precision. The third part of this work deals with mixtures of bosons and fermions in optical lattices, in which the interspecies interactions are accurately controlled by means of a Feshbach resonance. Studies of the equilibrium phases show that the bosonic superfluid to Mott insulator transition is shifted towards lower lattice depths when bosons and fermions interact attractively. This observation is further analyzed by applying quantum phase revival spectroscopy to few-body systems consisting of a single fermion and a coherent bosonic field on individual lattice sites. In addition to the direct measurement of Bose-Fermi interaction energies, Bose-Bose interactions are shown to be modified by the presence of a fermion. This renormalization of bosonic interaction energies can explain the shift of the Mott insulator transition. The experiments of this thesis lay important foundations for future studies of quantum magnetism with fermionic spin mixtures as well as for the realization of complex quantum phases with Bose-Fermi mixtures. They furthermore point towards physics that reaches beyond the single-band Hubbard model.
Abstract:
Despite the fact that photographic stimuli are used across experimental contexts with both human and nonhuman subjects, the nature of individuals' perceptions of these stimuli is still not well understood. In the present experiments, we tested whether three orangutans and 36 human children could use photographic information presented on a computer screen to solve a perceptually corresponding problem in the physical domain. Furthermore, we tested the cues that aided in this process by pitting featural information against spatial position in a series of probe trials. We found that many of the children and one orangutan were successfully able to use the information cross-dimensionally; however, the other two orangutans and almost a quarter of the children failed to acquire the task. Species differences emerged with respect to ease of task acquisition. More striking, however, were the differences in cues that participants used to solve the task: whereas the orangutan used a spatial strategy, the majority of children used a featural one. Possible reasons for these differences are discussed from both evolutionary and developmental perspectives. The novel results found here underscore the need for further testing in this area to design appropriate experimental paradigms in future comparative research settings.
Abstract:
With recent advances in mass spectrometry techniques, it is now possible to investigate proteins over a wide range of molecular weights in small biological specimens. This advance has generated data-analytic challenges in proteomics, similar to those created by microarray technologies in genetics, namely, discovery of "signature" protein profiles specific to each pathologic state (e.g., normal vs. cancer) or differential profiles between experimental conditions (e.g., treated by a drug of interest vs. untreated) from high-dimensional data. We propose a data-analytic strategy for discovering protein biomarkers based on such high-dimensional mass-spectrometry data. A real biomarker-discovery project on prostate cancer is taken as a concrete example throughout the paper: the project aims to identify proteins in serum that distinguish cancer, benign hyperplasia, and normal states of the prostate using the Surface Enhanced Laser Desorption/Ionization (SELDI) technology, a recently developed mass spectrometry technique. Our data-analytic strategy takes properties of the SELDI mass spectrometer into account: the SELDI output of a specimen contains about 48,000 (x, y) points, where x is the protein mass divided by the number of charges introduced by ionization and y is the protein intensity at the corresponding mass-per-charge value, x, in that specimen. Given high coefficients of variation and other characteristics of the protein intensity measures (y values), we reduce the measures of protein intensities to a set of binary variables that indicate peaks in the y-axis direction in the nearest neighborhoods of each mass-per-charge point in the x-axis direction. We then account for a shifting (measurement error) problem of the x-axis in the SELDI output. After this pre-analysis processing of the data, we combine the binary predictors to generate classification rules for cancer, benign hyperplasia, and normal states of the prostate.
Our approach is to apply the boosting algorithm to select binary predictors and construct a summary classifier. We empirically evaluate sensitivity and specificity of the resulting summary classifiers with a test dataset that is independent from the training dataset used to construct the summary classifiers. The proposed method performed nearly perfectly in distinguishing cancer and benign hyperplasia from normal. In the classification of cancer vs. benign hyperplasia, however, an appreciable proportion of the benign specimens were classified incorrectly as cancer. We discuss practical issues associated with our proposed approach to the analysis of SELDI output and its application in cancer biomarker discovery.
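A minimal sketch of the peak-binarization step described above, reducing intensity measures to binary local-maximum indicators in a nearest-neighborhood window (the window size k and the toy spectrum are assumptions; the x-axis shifting correction and the boosting step are not shown):

```python
import numpy as np

def peak_indicators(y, k=3):
    """Binary indicator: 1 if y[i] is the maximum within its k-nearest neighborhood."""
    y = np.asarray(y, float)
    ind = np.zeros(y.size, dtype=int)
    for i in range(y.size):
        lo, hi = max(0, i - k), min(y.size, i + k + 1)
        if y[i] == y[lo:hi].max():
            ind[i] = 1
    return ind

# toy intensity trace standing in for a SELDI spectrum (illustrative values)
spectrum = np.array([0.1, 0.3, 2.0, 0.4, 0.2, 1.5, 0.3, 0.1])
print(peak_indicators(spectrum, k=2))  # → [0 0 1 0 0 1 0 0]
```

The resulting binary vectors are what the boosting algorithm combines into the summary classifier; working with peak indicators rather than raw intensities sidesteps their high coefficients of variation.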
Abstract:
In biostatistical applications, interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data are described by the well-known singly-censored current status model, also known as interval censored data, case I. We extend this current status model by allowing the presence of a time-dependent process, which is partly observed, and by allowing C to depend on T through the observed part of this time-dependent process. Because of the high dimension of the covariate process, no globally efficient estimators exist with good practical performance at moderate sample sizes. We follow the approach of Robins and Rotnitzky (1992) by modeling the censoring variable, given the time variable and the covariate process, i.e., the missingness process, under the restriction that it satisfies coarsening at random. We propose a generalization of the simple current status estimator of the distribution of T, and of smooth functionals of the distribution of T, which is based on an estimate of the missingness process. In this estimator the covariates enter only through the estimate of the missingness process. Due to the coarsening at random assumption, the estimator has the interesting property that if we estimate the missingness process more nonparametrically, then we improve its efficiency. We show that by local estimation of an optimal model or optimal function of the covariates for the missingness process, the generalized current status estimator for smooth functionals becomes locally efficient, meaning that it is efficient if the right model or covariate is consistently estimated, and it is consistent and asymptotically normal in general. Estimation of the optimal model requires estimation of the conditional distribution of T, given the covariates.
Any (prior) knowledge of this conditional distribution can be used at this stage without any risk of losing root-n consistency. We also propose locally efficient one step estimators. Finally, we show some simulation results.
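In the covariate-free case, the simple current status estimator referred to above reduces to isotonic regression of the censoring indicators on the monitoring times (the NPMLE of F). A sketch via the pool-adjacent-violators algorithm on simulated data (distributions and sample size are assumptions):

```python
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing) least-squares fit."""
    vals, wts, cnt = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); cnt.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            # pool the two violating blocks into their weighted average
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; cnt[-2] += cnt[-1]
            vals.pop(); wts.pop(); cnt.pop()
    out = []
    for v, c in zip(vals, cnt):
        out.extend([v] * c)
    return np.array(out)

rng = np.random.default_rng(1)
T = rng.exponential(1.0, 500)          # unobserved event times (assumed distribution)
C = rng.uniform(0.0, 3.0, 500)         # monitoring times
delta = (T <= C).astype(float)         # current status indicator: event occurred by C?
order = np.argsort(C)
F_hat = pava(delta[order], np.ones(500))   # NPMLE of F evaluated at the sorted C's
```

The covariate-adjusted generalization of the abstract replaces the raw indicators with inverse-weighted versions built from the estimated missingness process.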
Abstract:
Use of microarray technology often leads to high-dimensional, low-sample-size data settings. Over the past several years, a variety of novel approaches have been proposed for variable selection in this context. However, only a small number of these have been adapted for time-to-event data where censoring is present. Among the standard variable selection methods shown both to have good predictive accuracy and to be computationally efficient is the elastic net penalization approach. In this paper, an adaptation of the elastic net approach is presented for variable selection both under the Cox proportional hazards model and under an accelerated failure time (AFT) model. Assessment of the two methods is conducted through simulation studies and through analysis of microarray data obtained from a set of patients with diffuse large B-cell lymphoma, where time to survival is of interest. The approaches are shown to match or exceed the predictive performance of a Cox-based and an AFT-based variable selection method. The methods are moreover shown to be much more computationally efficient than their respective Cox- and AFT-based counterparts.
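A bare-bones coordinate-descent sketch of the elastic net penalty in a p ≫ n setting (a generic Gaussian-response illustration, not the Cox or AFT adaptations developed in the paper; the penalty parameters and data are assumed):

```python
import numpy as np

def soft_threshold(r, t):
    """Soft-thresholding operator induced by the L1 part of the penalty."""
    return np.sign(r) * max(abs(r) - t, 0.0)

def elastic_net(X, y, lam=0.1, l1_ratio=0.5, n_iter=200):
    """Coordinate descent for
    (1/2n)||y - Xb||^2 + lam * (l1_ratio * ||b||_1 + (1 - l1_ratio)/2 * ||b||^2)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual excluding feature j
            rho = X[:, j] @ r / n
            b[j] = (soft_threshold(rho, lam * l1_ratio)
                    / (X[:, j] @ X[:, j] / n + lam * (1 - l1_ratio)))
    return b

# p >> n toy data: only the first two of 50 "genes" carry signal (assumed setup)
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 50))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(30)
b = elastic_net(X, y, lam=0.2, l1_ratio=0.9)
```

The L1 part zeroes out most coefficients (the variable selection), while the L2 part stabilizes the fit among correlated features; the survival adaptations in the paper keep this penalty but swap the squared-error loss for the Cox partial likelihood or an AFT loss.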