Abstract:
In this work we explore the validity of employing a modified version of the nonrelativistic structure code civ3 for heavy, highly charged systems, using Na-like tungsten as a simple benchmark. We present radiative and subsequent collisional atomic data, compared with corresponding results from a fully relativistic structure and collisional model. Our motivation is to benchmark civ3 against the relativistic grasp0 structure code. This is an important study, as civ3 wave functions in nonrelativistic R-matrix calculations are computationally less expensive than their Dirac counterparts. There are very few existing data for the W LXIV ion in the literature with which we can compare, apart from an incomplete set of energy levels available from the NIST database. The overall accuracy of the present results is thus determined by the comparison between the civ3 and grasp0 structure codes, alongside collisional atomic data computed by the R-matrix Breit-Pauli and Dirac codes. It is found that the electron-impact collision strengths and effective collision strengths computed by these differing methods are in good general agreement for the majority of the transitions considered, across a broad range of electron temperatures.
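The effective collision strengths referred to above are, in the standard definition, Maxwellian averages of the collision strength over the energy of the scattered electron. A minimal numerical sketch of that average, using a synthetic Ω(E) grid rather than actual W LXIV data (the function name and the grid are illustrative assumptions):

```python
import numpy as np

def effective_collision_strength(E, omega, kT):
    """Maxwellian-averaged effective collision strength:
       Upsilon(T) = integral of Omega(E) * exp(-E/kT) d(E/kT),
    with E the final (scattered) electron energy in the same units as kT."""
    x = E / kT                       # reduced energy E/kT
    integrand = omega * np.exp(-x)   # Omega(E) * exp(-E/kT)
    # trapezoidal rule over the reduced-energy grid
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

# Illustrative, made-up collision-strength grid (not real atomic data)
E = np.linspace(0.0, 50.0, 2001)
omega = 0.1 / (1.0 + E)
upsilon = effective_collision_strength(E, omega, kT=5.0)
```

For a constant Ω the average reduces to Ω itself (up to the truncation of the energy grid), which gives a quick sanity check of the quadrature.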
Abstract:
Modelling of massive stars and supernovae (SNe) plays a crucial role in understanding galaxies. From this modelling we can derive fundamental constraints on stellar evolution, mass-loss processes, mixing, and the products of nucleosynthesis. Proper account must be taken of all important processes that populate and depopulate the levels (collisional excitation, de-excitation, ionization, recombination, photoionization, bound–bound processes). For the analysis of Type Ia SNe and core-collapse SNe (Types Ib, Ic and II), Fe-group elements are particularly important. Unfortunately, little data is currently available; most noticeably absent are the photoionization cross-sections for the Fe-peak elements, which have high abundances in SNe. Important interactions for both photoionization and electron-impact excitation are calculated using the relativistic Dirac atomic R-matrix codes (DARC) for low-ionization stages of cobalt. All results are calculated up to photon energies of 45 eV and electron energies up to 20 eV. The wavefunction representation of Co III has been generated using GRASP0 by including the dominant 3d⁷, 3d⁶[4s, 4p], 3p⁴3d⁹ and 3p⁶3d⁹ configurations, resulting in 292 fine-structure levels. Electron-impact collision strengths and Maxwellian-averaged effective collision strengths across a wide range of astrophysically relevant temperatures are computed for Co III. In addition, statistically weighted level-resolved ground and metastable photoionization cross-sections are presented for Co II and compared directly with existing work.
Abstract:
This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas.
We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient and easy alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
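The disentangling idea, estimating correlations and volatilities separately and then recombining them into a covariance matrix, can be sketched in a few lines. This is a simplified illustration of the general recipe, not the chapter's estimator: Gaussian-rank correlations and a scaled median absolute deviation stand in for the quantile- and median-based realized measures, and pre-averaging and synchronization are omitted.

```python
import numpy as np
from statistics import NormalDist

def gaussian_rank_correlation(x):
    """Correlation matrix of normal scores (Gaussian ranks), robust to
    outliers/jumps. x is an (n_obs, n_assets) array of returns."""
    n = x.shape[0]
    ranks = x.argsort(axis=0).argsort(axis=0) + 1      # 1..n per column
    inv_cdf = np.vectorize(NormalDist().inv_cdf)
    scores = inv_cdf(ranks / (n + 1.0))                # normal scores
    return np.corrcoef(scores, rowvar=False)

def mad_volatility(x):
    """Median-absolute-deviation scale per asset, rescaled by 1.4826 to be
    consistent with the standard deviation under normality."""
    med = np.median(x, axis=0)
    return 1.4826 * np.median(np.abs(x - med), axis=0)

def disentangled_covariance(returns):
    """Recombine correlations and volatilities: Sigma = D R D."""
    R = gaussian_rank_correlation(returns)
    s = mad_volatility(returns)
    return R * np.outer(s, s)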
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response, while at the same time providing a means to both interpolate and extrapolate the available frequency-domain data. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable rational function process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied.
It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full-matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, such an approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
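Once a candidate set of poles is available, element-by-element fitting reduces to a small linear least-squares problem per matrix element. A minimal sketch of that linear sub-step (real poles only, for simplicity; the names and the synthetic data are illustrative, and pole-relocating schemes such as vector fitting wrap an iteration around this step):

```python
import numpy as np

def fit_residues(s, H, poles):
    """Least-squares fit of one network-matrix element to the model
       H(s) ~= d + sum_k r_k / (s - p_k), with the poles held fixed.
    Real poles assumed; complex-conjugate pole pairs need the usual
    extra bookkeeping to keep the residues in conjugate pairs."""
    A = np.column_stack([1.0 / (s[:, None] - poles[None, :]),
                         np.ones_like(s)])
    # Stack real and imaginary parts so the unknowns stay real
    A_ri = np.vstack([A.real, A.imag])
    H_ri = np.concatenate([H.real, H.imag])
    coeffs, *_ = np.linalg.lstsq(A_ri, H_ri, rcond=None)
    return coeffs[:-1], coeffs[-1]          # residues r_k and constant d

def rational_eval(s, poles, residues, d):
    """Evaluate the pole-residue model on a frequency grid s = j*omega."""
    return d + (residues[None, :] / (s[:, None] - poles[None, :])).sum(axis=1)
```

Because the model is linear in the residues for fixed poles, a response synthesized from a known pole-residue set is recovered essentially exactly, which is a convenient self-test of the fitting routine.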
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate the 30,171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple to implement, the approach can be employed even if the model and the DA scheme are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake. However, due to the sparsity of the TSM data in both time and space, the two could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; combined with DA, this would help in better understanding, for instance, environmental hazard variables. We found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, together with the ensemble-size limit on performance, leads to the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. Applying ROM with the non-intrusive DA approach might result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
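The file-based handshake described above can be sketched in a few lines. This is a hypothetical miniature, not the thesis code: the file names, the model command and the analysis callback are all illustrative assumptions.

```python
import os
import subprocess

import numpy as np

def assimilation_cycle(state, model_cmd, workdir, analysis_step):
    """One cycle of non-intrusive, file-based model/DA coupling.

    The model executable is driven purely through files, so the only
    change needed in the model source is a few lines that read the
    input file and write the output file. `analysis_step` stands in
    for the DA update (e.g. a VEnKF analysis)."""
    infile = os.path.join(workdir, "input.txt")
    outfile = os.path.join(workdir, "output.txt")
    np.savetxt(infile, state)                                  # hand state to the model
    subprocess.run(model_cmd + [infile, outfile], check=True)  # run the external model
    forecast = np.loadtxt(outfile)                             # read the forecast back
    return analysis_step(forecast)                             # DA update on the forecast
```

A control script would loop over assimilation windows, calling `assimilation_cycle` once per window; with a parallel model, `model_cmd` would be the parallel launch command, and the control program simply waits for it to finish, as described above.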
Abstract:
Specific domains can determine protein structure-function relationships. For the Alzheimer's Amyloid Precursor Protein (APP), several domains have been described, in both its intracellular and extracellular fragments. Many functions have been attributed to APP, including an important role in cell adhesion and cell-to-cell recognition. This places APP at the centre of key biological responses, including synaptic transmission. To fulfil these functions, the extracellular domains take on added significance. The APP extracellular domain RERMS is in fact a likely candidate to be involved in the aforementioned physiological processes. A multidisciplinary approach was employed to address the role of RERMS. The RERMS peptide was crosslinked to PEG (polyethylene glycol) and the reaction validated by FTIR (Fourier transform infrared spectroscopy). FTIR proved the most efficient way to validate this reaction because it requires only a drop of sample and gives information about the reactions that occurred in a mixture. The data obtained consist of an infrared spectrum of the sample, where peak positions give information about the structure of the molecules and peak intensities relate to their concentrations. Subsequently, substrates of PEG impregnated with RERMS were prepared, and SH-SY5Y cells (a human neuroblastoma cell line) were plated and differentiated on them. Several morphological alterations were clearly evident. The RERMS peptide provoked cells to take on a flatter appearance, and the cytoskeletal architecture changed, with the appearance of stress fibres, a clear indicator of actin reorganization. Given that focal adhesions play a key role in determining cellular structure, the latter were directly investigated. Focal adhesion kinase (FAK) is one of the most highly expressed proteins in the CNS (central nervous system) during development, and has been described as crucial for the radial migration of neurons.
FAK localizes to growth cones and mediates the response to attractive and repulsive cues during migration. One of the mechanisms by which FAK becomes active is autophosphorylation at tyrosine 397. It became clearly evident that, in the presence of the RERMS peptide, pFAK staining at focal adhesions intensified and more focal adhesions became apparent. Furthermore, speckled structures in the nucleus, putatively corresponding to increased expression activity, also increased with RERMS. Taken together, these results indicate that the RERMS domain of APP plays a critical role in determining cellular physiological responses. A model is suggested in which the RERMS domain is recognized by integrins and mediates intracellular responses involving FAK, talin, actin filaments and vinculin. This mechanism is probably responsible for mediating cell adhesion and neurite outgrowth in neurons.
Abstract:
Doctoral thesis in Chemistry, Faculdade de Ciências do Mar e do Ambiente, Univ. do Algarve, 2002
Abstract:
The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated by DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of these deleterious compounds, some challenges remain. Inhibitors can be components of the samples themselves, of the substrate where the samples were deposited, or chemicals associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation with the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be unsuitable for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify the locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts.
The extraction methods tested were able to remove >90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was utilized. Although the results suggest that most inhibitors produce minimal effects on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.
Abstract:
Neuroimaging research involves analyses of huge amounts of biological data that may or may not be related to cognition. This relationship is usually approached using univariate methods, and correction methods are therefore mandatory for reducing false positives; the probability of false negatives is, however, also increased. Multivariate frameworks have been proposed to help alleviate this trade-off. Here we apply multivariate distance matrix regression to the simultaneous analysis of biological and cognitive data, namely, structural connections among 82 brain regions and several latent factors estimating cognitive performance. We tested whether cognitive differences predict distances among individuals with respect to their connectivity pattern. Beginning with 3,321 connections among regions, the 36 edges best predicted by the individuals' cognitive scores were selected. Cognitive scores were related to connectivity distances in both the full (3,321-edge) and reduced (36-edge) connectivity patterns. The selected edges connect regions distributed across the entire brain, and the network defined by these edges supports high-order cognitive processes such as (a) (fluid) executive control, (b) (crystallized) recognition, learning, and language processing, and (c) visuospatial processing. This multivariate study suggests that a widespread but limited set of regions in the human brain supports high-level differences in cognitive ability. Hum Brain Mapp, 2016. © 2016 Wiley Periodicals, Inc.
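At its core, multivariate distance matrix regression asks whether predictors (here, cognitive scores) explain a matrix of inter-individual distances. A minimal sketch of the pseudo-F statistic in the McArdle-Anderson formulation (the data below are synthetic, and in practice significance is assessed by permuting the predictor rows):

```python
import numpy as np

def mdmr_pseudo_f(D, X):
    """Pseudo-F for multivariate distance matrix regression: do the
    predictors in X explain the pairwise distances in D?

    D: (n, n) distance matrix among individuals
    X: (n, p) predictor matrix (an intercept is added internally)"""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                  # Gower-centred inner products
    Xc = np.column_stack([np.ones(n), X])        # add intercept
    H = Xc @ np.linalg.pinv(Xc.T @ Xc) @ Xc.T    # hat (projection) matrix
    m = Xc.shape[1] - 1                          # number of predictors
    num = np.trace(H @ G) / m
    den = np.trace((np.eye(n) - H) @ G) / (n - m - 1)
    return num / den
```

With a predictor that genuinely drives the features behind the distances, the statistic is large; with an unrelated predictor it hovers around one, which is what a permutation test exploits.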
Abstract:
Model misspecification affects the classical test statistics used to assess the fit of Item Response Theory (IRT) models. Robust tests, such as the Generalized Lagrange Multiplier and Hausman tests, have been derived under model misspecification, but their use has not been widely explored in the IRT framework. In the first part of the thesis, we introduce the Generalized Lagrange Multiplier test to detect differential item response functioning in IRT models for binary data under model misspecification. By means of a simulation study and a real data analysis, we compare its performance with the classical Lagrange Multiplier test, computed using the Hessian and the cross-product matrix, and the Generalized Jackknife Score test. The power of these tests is computed empirically and asymptotically. The misspecifications considered are local dependence among items and a non-normal distribution of the latent variable. The results highlight that, under mild model misspecification, all tests perform well, while under strong model misspecification their performance deteriorates. None of the tests considered shows overall superior performance to the others. In the second part of the thesis, we extend the Generalized Hausman test to detect non-normality of the latent variable distribution. To build the test, we consider a semi-nonparametric IRT model, which assumes a more flexible latent variable distribution. By means of a simulation study and two real applications, we compare the performance of the Generalized Hausman test with the M2 limited-information goodness-of-fit test and the Likelihood-Ratio test. Additionally, information criteria are computed. The Generalized Hausman test performs better than the Likelihood-Ratio test in terms of Type I error rates and better than the M2 test in terms of power. The performance of the Generalized Hausman test and the information criteria deteriorates when the sample size is small and there are few items.
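Hausman-type tests share a common quadratic-form skeleton: contrast two estimators that should agree under the null hypothesis. A minimal sketch of the classical version (the generalized variants discussed in the thesis replace the variance of the contrast with a robust estimate; the numbers in the usage note are purely illustrative):

```python
import numpy as np

def hausman_statistic(theta_a, theta_b, V_a, V_b):
    """Classical Hausman statistic.

    theta_a: estimator consistent even under misspecification, variance V_a
    theta_b: estimator efficient under the null, variance V_b
    Under the null, Var(theta_a - theta_b) = V_a - V_b, and the statistic
    is asymptotically chi-square with rank(V_a - V_b) degrees of freedom."""
    diff = np.asarray(theta_a) - np.asarray(theta_b)
    V_diff = np.asarray(V_a) - np.asarray(V_b)
    return float(diff @ np.linalg.pinv(V_diff) @ diff)
```

For example, contrasting estimates [1.1, 2.0] and [1.0, 2.0] with variance difference diag(0.01, 0.01) gives a statistic of 1.0, far below the chi-square(2) critical value, so the null would not be rejected.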
Abstract:
The coastal ocean is a complex environment with extremely dynamic processes that require a high-resolution, cross-scale modeling approach in which all hydrodynamic fields and scales are considered integral parts of the overall system. In the last decade, unstructured-grid models have been used to advance seamless modeling across scales. The data assimilation methodologies needed to improve unstructured-grid models in the coastal seas, on the other hand, have been developed only recently and need significant advancement. Here, we link unstructured-grid ocean modeling to variational data assimilation methods. In particular, we show results from the modeling system SANIFS, based on the SHYFEM fully baroclinic unstructured-grid model interfaced with OceanVar, a state-of-the-art variational data assimilation scheme adopted for several systems based on a structured grid. OceanVar implements a 3DVar DA scheme. The background error covariance matrix is modeled as the combination of three linear operators. The vertical part is represented using multivariate EOFs for temperature, salinity, and sea level anomaly. The horizontal part is assumed to be Gaussian and isotropic, and is modeled using a first-order recursive filter algorithm designed for structured, regular grids. Here we introduce a novel recursive filter algorithm for unstructured grids. A local hydrostatic adjustment scheme models the rapidly evolving part of the background error covariance. We designed two data assimilation experiments using the SANIFS implementation interfaced with OceanVar over the period 2017-2018: one assimilating only temperature and salinity from Argo profiles, and a second also including sea level anomaly. The results showed a successful implementation of the approach and the added value of the assimilation for the active tracer fields. Over the broad basin, however, no significant improvement is seen for sea level, which requires future investigation.
Furthermore, a Machine Learning methodology based on an LSTM network has been used to predict the model SST increments.
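The first-order recursive filter mentioned above builds a Gaussian-like correlation model from cheap sweeps over the grid. A minimal one-dimensional sketch on a regular grid (the unstructured-grid extension introduced in the text is not reproduced here; the smoothing coefficient `alpha` and the impulse input are illustrative):

```python
import numpy as np

def recursive_filter_1d(field, alpha, passes=1):
    """First-order recursive filter on a regular 1-D grid: paired
    forward/backward exponential-smoothing sweeps whose repeated
    application tends toward a Gaussian response. alpha in (0, 1)
    controls the effective correlation length."""
    x = np.asarray(field, dtype=float).copy()
    for _ in range(passes):
        for i in range(1, x.size):               # forward sweep
            x[i] = alpha * x[i - 1] + (1 - alpha) * x[i]
        for i in range(x.size - 2, -1, -1):      # backward sweep
            x[i] = alpha * x[i + 1] + (1 - alpha) * x[i]
    return x
```

Applied to a unit impulse, the filter spreads the spike into a smooth, symmetric bump centred on the impulse, which is exactly the correlation-shaping role it plays inside the background error covariance.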
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. the two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and to add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological processes and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We have developed IIS by integrating diverse databases, in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions.
IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
Abstract:
Subjects with spinal cord injury (SCI) exhibit impaired left ventricular (LV) diastolic function, which has been reported to be attenuated by regular physical activity. This study investigated the relationship between circulating matrix metalloproteinases (MMPs) and tissue inhibitors of MMPs (TIMPs) and echocardiographic parameters in SCI subjects, and the role of physical activity in this regard. Forty-two men with SCI [19 sedentary (S-SCI) and 23 physically active (PA-SCI)] were evaluated by clinical, anthropometric, laboratory and echocardiographic analysis. Plasma pro-MMP-2, MMP-2, MMP-8, pro-MMP-9, MMP-9, TIMP-1 and TIMP-2 levels were determined by enzyme-linked immunosorbent assay and zymography. PA-SCI subjects presented lower pro-MMP-2 and pro-MMP-2/TIMP-2 levels and better markers of LV diastolic function (lower E/Em and higher Em and E/A values) than S-SCI subjects. Bivariate analysis showed that pro-MMP-2 correlated inversely with Em and directly with E/Em, while MMP-9 correlated directly with LV mass index and LV end-diastolic diameter in the whole sample. Following multiple regression analysis, pro-MMP-2, but not physical activity, remained associated with Em, while MMP-9 remained associated with LV mass index in the whole sample. These findings suggest differing roles for MMPs in the regulation of LV structure and function, and an interaction among pro-MMP-2, diastolic function and physical activity in SCI subjects.
Abstract:
Corynebacterium species (spp.) are among the most frequently isolated pathogens associated with subclinical mastitis in dairy cows. However, simple, fast and reliable methods for the identification of species of the genus Corynebacterium are not currently available. This study aimed to evaluate the usefulness of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identifying Corynebacterium spp. isolated from the mammary glands of dairy cows. Corynebacterium spp. were isolated from milk samples via microbiological culture (n=180) and were analyzed by MALDI-TOF MS and 16S rRNA gene sequencing. Using the MALDI-TOF MS methodology, 161 Corynebacterium spp. isolates (89.4%) were correctly identified at the species level, whereas 12 isolates (6.7%) were identified only at the genus level. Most isolates identified at the species level as Corynebacterium bovis by 16S rRNA gene sequencing (n=156; 86.7%) were also identified as C. bovis by MALDI-TOF MS. Five Corynebacterium spp. isolates (2.8%) were not correctly identified at the species level by MALDI-TOF MS, and 2 isolates (1.1%) were considered unidentified because, despite having MALDI-TOF MS scores >2, only the genus level was correctly identified. Therefore, MALDI-TOF MS could serve as an alternative method for species-level diagnosis of bovine intramammary infections caused by Corynebacterium spp.
Abstract:
The article investigates patterns of performance in, and relationships among, grip strength, gait speed and self-rated health, considering the variables of gender, age and family income. The study was conducted in a probabilistic sample of community-dwelling elderly people aged 65 and over, members of a population study on frailty. A total of 689 elderly people without cognitive deficits suggestive of dementia underwent tests of gait speed and grip strength. Comparisons between groups were based on low, medium and high speed and strength. Self-rated health was assessed using a 5-point scale. The men and the younger elderly individuals scored significantly higher on grip strength and gait speed than the women and the oldest did; the richest scored higher than the poorest on grip strength and gait speed; women and men aged over 80 had weaker grip strength and lower gait speed; slow gait speed and low income emerged as risk factors for a worse health evaluation. Lower muscular strength affects the self-rated assessment of health because it results in a reduction in functional capacity, especially in the presence of poverty and a lack of compensatory factors.