Abstract:
Reduced organic sulfur (ROS) compounds are environmentally ubiquitous and play an important role in sulfur cycling as well as in the biogeochemical cycles of toxic metals, in particular mercury. Effective methods for the analysis of ROS in environmental samples and investigations of the interactions of ROS with mercury are critical for understanding the role of ROS in mercury cycling, yet both are poorly studied. Covalent affinity chromatography-based methods were attempted for the analysis of ROS in environmental water samples. A method was developed for the analysis of environmental thiols: preconcentration on a covalent affinity chromatographic column or by solid phase extraction, release of the thiols from the thiopropyl sepharose gel using TCEP, and analysis by HPLC-UV or HPLC-FL. Under the optimized conditions, the detection limits of the method with HPLC-FL detection were 0.45 and 0.36 nM for Cys and GSH, respectively. Our results suggest that covalent affinity methods are efficient for thiol enrichment and interference elimination, demonstrating their promise for a sensitive, reliable, and practical technique for thiol analysis in environmental water samples. The dissolution of mercury sulfide (HgS) in the presence of ROS and dissolved organic matter (DOM) was investigated by quantifying the effects of ROS on HgS dissolution and determining the speciation of the mercury released during ROS-induced HgS dissolution. It was observed that the presence of small ROS (e.g., Cys and GSH) and of large-molecule DOM, particularly at high concentrations, could significantly enhance the dissolution of HgS. The dissolved Hg determined during HgS dissolution using the conventional 0.22 μm cutoff method can include both colloidal Hg (e.g., HgS colloids) and truly dissolved Hg (e.g., Hg-ROS complexes). A centrifugal filtration method (3 kDa MWCO) was therefore employed to characterize the speciation and reactivity of the Hg released during ROS-enhanced HgS dissolution. The presence of small ROS could produce a considerable fraction (about 40% of the total mercury in solution) of truly dissolved mercury (< 3 kDa), probably due to the formation of Hg-Cys or Hg-GSH complexes. The truly dissolved Hg formed during GSH- or Cys-enhanced HgS dissolution was directly reducible (100% for GSH and 40% for Cys) by stannous chloride, demonstrating its potential role in Hg transformation and bioaccumulation.
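Detection limits of the kind quoted above (0.45 and 0.36 nM) are typically derived from calibration data. The following is a minimal sketch of such a calculation, using the common 3-sigma-of-blank convention; the concentrations and peak areas are hypothetical, not the study's data.

    # Hedged sketch: estimating an HPLC-FL detection limit from calibration data.
    # The standard concentrations, peak areas, and blanks below are hypothetical.
    import numpy as np

    conc_nM = np.array([1.0, 2.5, 5.0, 10.0, 25.0])          # thiol standards (nM)
    peak_area = np.array([0.8, 2.1, 4.2, 8.3, 20.9])          # fluorescence peak areas (a.u.)
    blank_areas = np.array([0.05, 0.07, 0.04, 0.06, 0.05])    # replicate blank injections

    slope, intercept = np.polyfit(conc_nM, peak_area, 1)      # linear calibration
    lod = 3 * blank_areas.std(ddof=1) / slope                 # LOD = 3*sigma_blank / slope
    print(f"estimated LOD: {lod:.2f} nM")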
Abstract:
Cotton is the most abundant natural fiber in the world. Many countries are involved in the growing, importation, exportation, and production of this commodity. Paper documentation claiming geographic origin is the current method employed at U.S. ports for identifying cotton sources and enforcing tariffs. Because customs documentation can easily be falsified, a robust method is needed for authenticating or refuting the claimed source of cotton commodities. This work presents, for the first time, a comprehensive approach to the chemical characterization of unprocessed cotton in order to provide an independent tool for establishing geographic origin. Elemental and stable isotope ratio analysis of unprocessed cotton increases the ability to distinguish cotton beyond the physical and morphological examinations that could be, and currently are, performed. Elemental analysis was conducted using LA-ICP-MS, LA-ICP-OES, and LIBS in order to compare the analytical performance of the techniques directly and to determine the utility of each for this purpose. Multivariate predictive modeling approaches are used to determine the potential of elemental and stable isotopic information to aid in the geographic provenancing of unprocessed cotton of both domestic and foreign origin. These approaches assess the stability of the profiles to temporal and spatial variation to determine the feasibility of this application. This dissertation also evaluates plasma conditions and ablation processes in order to improve the quality of analytical measurements made using atomic emission spectroscopy techniques. These interactions, in LIBS particularly, are assessed to identify potential simplifications of the instrumental design and method development phases. This is accomplished through the analysis of several matrices representing different physical substrates to determine the potential for adopting universal operating parameters for 532 nm and 1064 nm LIBS. A novel approach to evaluating both ablation processes and plasma conditions with a single measurement was developed and used to determine the "useful ablation efficiency" of different materials. The work presented here demonstrates the potential for a priori prediction of some of the laser parameters important in analytical LIBS measurements.
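Stable isotope ratios of the kind used for provenancing are conventionally reported in delta notation relative to an international standard. The sketch below shows that conversion; the sample ratio is hypothetical and the standard ratio is an approximate commonly cited value, used here only for illustration.

    # Hedged sketch: delta notation for stable isotope ratios, as commonly used
    # in provenance studies. The measured sample ratio below is hypothetical.
    def delta_per_mil(r_sample: float, r_standard: float) -> float:
        """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
        return (r_sample / r_standard - 1.0) * 1000.0

    R_STANDARD_13C = 0.011180     # approximate 13C/12C of the VPDB reference standard
    r_sample = 0.010890           # hypothetical measured 13C/12C of a cotton sample
    print(f"delta13C = {delta_per_mil(r_sample, R_STANDARD_13C):.1f} per mil")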
Abstract:
The elemental analysis of soil is useful in forensic and environmental sciences. Methods were developed and optimized for two laser-based multi-element analysis techniques: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS). This work represents the first use of a 266 nm laser for forensic soil analysis by LIBS. Sample preparation methods were developed and optimized for a variety of sample types, including pellets for large bulk soil specimens (470 mg) and sediment-laden filters (47 mg), and tape-mounting for small transfer-evidence specimens (10 mg). Analytical performance for sediment filter pellets and tape-mounted soils was similar to that achieved with bulk pellets. An inter-laboratory comparison exercise was designed to evaluate the performance of the LA-ICP-MS and LIBS methods, as well as micro X-ray fluorescence (μXRF), across multiple laboratories. Limits of detection (LODs) were 0.01-23 ppm for LA-ICP-MS, 0.25-574 ppm for LIBS, and 16-4400 ppm for μXRF, all well below the levels normally seen in soils. Good intra-laboratory precision (≤ 6 % relative standard deviation (RSD) for LA-ICP-MS; ≤ 8 % for μXRF; ≤ 17 % for LIBS) and inter-laboratory precision (≤ 19 % for LA-ICP-MS; ≤ 25 % for μXRF) were achieved for most elements, which is encouraging for a first inter-laboratory exercise. While LIBS generally has higher LODs and RSDs than LA-ICP-MS, both techniques generated multi-element data of sufficient quality for discrimination purposes. Multivariate methods using principal components analysis (PCA) and linear discriminant analysis (LDA) were developed for the discrimination of soils from different sources. Specimens from different sites that were indistinguishable by color alone were discriminated by elemental analysis. Correct classification rates of 94.5 % or better were achieved in a simulated forensic discrimination of three similar sites for both LIBS and LA-ICP-MS. Results for tape-mounted specimens were nearly identical to those achieved with pellets. The methods were tested on soils from the USA, Canada, and Tanzania. Within-site heterogeneity was site-specific. Elemental differences were greatest for specimens separated by large distances, even within the same lithology. Elemental profiles can thus be used to discriminate soils from different locations and to narrow down locations even when mineralogy is similar.
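The PCA/LDA discrimination step described above can be illustrated with a short sketch. The element concentrations below are synthetic stand-ins for multi-element profiles from three sites, not the study's data, and the pipeline is a generic scikit-learn construction rather than the dissertation's exact workflow.

    # Hedged sketch: PCA followed by LDA for discriminating soil sources from
    # multi-element profiles. Data are synthetic.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_sites, reps, n_elements = 3, 20, 15
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(reps, n_elements))
                   for i in range(n_sites)])        # synthetic element concentrations
    y = np.repeat(np.arange(n_sites), reps)         # site labels

    clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                        LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, y, cv=5)       # cross-validated classification rate
    print(f"mean correct classification: {scores.mean():.1%}")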
Abstract:
A debate is currently prevalent among structural engineers regarding the use of the cracked versus uncracked moment of inertia of structural elements in analyzing and designing tall concrete buildings. (The basic definition of a tall building, according to the Journal of Structural Design of Tall Buildings, Vol. 13, No. 5, 2004, is a structure that is 160 feet or more in height, or six stories or more.) The controversy results from differing interpretations of certain ACI (American Concrete Institute) code provisions. The issue is whether designers should use the cracked moment of inertia to estimate lateral deflection, and whether the computed lateral deflection should then be used in a subsequent second-order analysis (an analysis that considers the effect of first-order lateral deflections on bending moments and shear stresses). On one hand, bending moments and shear forces estimated from the uncracked moment of inertia of the sections may result in conservative designs by overestimating moments and shears. On the other hand, the same analyses may underestimate lateral deflections, resulting in unsafe designs.
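The practical consequence of the stiffness assumption is that a first-order drift estimate scales roughly inversely with the moment of inertia used. The sketch below makes this concrete for an idealized cantilever shear wall; all numbers are hypothetical, and the 0.35·Ig factor is quoted only as one common code-type effective-stiffness assumption, not as the specific provision at issue in the debate.

    # Hedged sketch: how the cracked-section assumption changes a first-order
    # lateral drift estimate for an idealized cantilever shear wall.
    # Numbers are hypothetical; 0.35*Ig is one common code-type assumption.
    E = 3.6e6            # concrete modulus of elasticity (psi)
    b, h = 12.0, 240.0   # wall thickness and length (in)
    I_gross = b * h**3 / 12.0      # gross moment of inertia (in^4)
    I_cracked = 0.35 * I_gross     # reduced (cracked) moment of inertia

    P = 50e3             # resultant lateral load at the top (lb)
    L = 160.0 * 12.0     # building height (in), the 160 ft "tall building" threshold

    def tip_drift(I):
        return P * L**3 / (3.0 * E * I)   # cantilever tip deflection under a point load

    print(f"uncracked drift: {tip_drift(I_gross):.2f} in")
    print(f"cracked drift:   {tip_drift(I_cracked):.2f} in  (~{1/0.35:.1f}x larger)")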
Abstract:
This dissertation establishes a novel data-driven method to identify language-network activation patterns in pediatric epilepsy through the use of principal component analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects' data sets from five different hospitals were included in the study through a web-based repository site designed here at FIU. Different classification and clustering techniques were evaluated for identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of the lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language-network patterns into the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to establish nonlinear decision functions (NDF), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. Three distinct activation patterns (groups) were identified (average kappa of 0.65 with visual rating and 0.76 with LI), characterized by the regions of: 1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; 2) the IFG, left mesial frontal lobe, and right cerebellum, representing a variant left-dominant pattern with higher activation; and 3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization: its high-intensity activation suggests a possible remote effect of the right-hemisphere focus on traditionally left-lateralized functions. Overall, this data-driven method provides new insights into the mechanisms of brain compensation/reorganization and neural plasticity in pediatric epilepsy.
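The classification step, i.e. projecting an activation map into a low-dimensional PCA space and labeling it typical or atypical, can be sketched as follows. The data are synthetic, and a generic polynomial-kernel SVM stands in for the dissertation's nonlinear decision functions; only the 4D space and degree-7 nonlinearity are taken from the text.

    # Hedged sketch: projecting flattened fMRI activation maps into a PCA-based
    # decisional space and classifying them as typical vs. atypical.
    # Synthetic data; a generic SVM replaces the dissertation's NDFs.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    n_maps, n_voxels = 122, 5000
    maps = rng.normal(size=(n_maps, n_voxels))           # flattened activation maps
    labels = rng.integers(0, 2, size=n_maps)             # 0 = typical, 1 = atypical

    model = make_pipeline(PCA(n_components=4),           # 4D decisional space
                          SVC(kernel="poly", degree=7))  # nonlinearity degree 7
    model.fit(maps, labels)
    new_map = rng.normal(size=(1, n_voxels))
    print("predicted class:", model.predict(new_map)[0])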
Abstract:
The purpose of this study was to analyze network performance by observing the effect of varying network size and data link rate on one of the most commonly found network configurations. Computer networks have been growing explosively, and networking is used in every aspect of business, including advertising, production, shipping, planning, billing, and accounting. Communication takes place through networks that form the basis of information transfer. The number and type of components may vary from network to network depending on several factors, such as requirements and the actual physical placement of the network. Networks have no fixed size: they can be very small, consisting of, say, five or six nodes, or very large, consisting of over two thousand nodes. These varying sizes make it important to study network performance in order to predict the functioning and suitability of a network. The findings demonstrated that network performance parameters such as global delay, load, router processor utilization, and router processor delay are affected significantly by an increase in the size of the network, and that a correlation exists between these parameters and network size. The variations depend not only on the magnitude of the change in the physical area of the network but also on the data link rate used to connect its components.
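The reported correlation between network size and performance parameters can be quantified directly from simulation output. The sketch below shows that calculation with hypothetical values standing in for simulated global delay; these are not results from the study.

    # Hedged sketch: correlating network size with a simulated performance metric
    # (e.g., global delay). Values are hypothetical stand-ins for simulation output.
    import numpy as np

    nodes = np.array([5, 50, 250, 500, 1000, 2000])               # network sizes
    global_delay_ms = np.array([0.8, 1.9, 4.6, 7.2, 13.5, 26.1])  # hypothetical delays

    r = np.corrcoef(nodes, global_delay_ms)[0, 1]
    print(f"Pearson correlation between size and delay: {r:.3f}")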
Abstract:
Accounting students become practitioners facing ethical decision-making challenges that can be subject to various interpretations; hence, the profession is concerned with the appropriateness of their decisions. The moral development of these students has implications for a profession facing legal challenges, negative publicity, and government scrutiny. Accounting students' moral development has been studied by examining their responses to moral questions in Rest's Defining Issues Test (DIT), their professional attitudes on Hall's Professionalism Scale Dimensions, and their ethical orientation-based professional commitment and ethical sensitivity. This study extended research in accounting ethics and moral development by examining students at a college where an ethics course is a requirement for graduation. Knowledge of differences in the moral development of accounting students may alert practitioners and educators to potential problems resulting from a lack of ethical understanding as measured by moral development levels. If student moral development levels differ by major, and accounting majors have lower levels than other students, the conclusion may be that this difference is a causative factor in the alleged acts of malfeasance in the profession that can result in malpractice suits. The current study compared 205 accounting, business, and nonbusiness students from a private university. In addition to academic major and completion of an ethics course, the other independent variable was academic level. Gender and age were tested as control variables, and Rest's DIT score was the dependent variable. The primary analysis was a 2x3x3 ANOVA with post hoc tests for results with a significant p-value of less than 0.05. The results reveal that students who take an ethics course appear to have a higher level of moral development (p=0.013), as measured by the DIT, than students at the same academic level who have not taken an ethics course. In addition, a statistically significant difference (p=0.034) exists between freshmen who took an ethics class and juniors who did not. For every analysis except one, the lower class year with an ethics class had a higher level of moral development than the higher class year without one. These results suggest that ethics education in particular has a greater effect on the level of moral development than education in general. The gender-specific analyses suggest that males and females respond differently to taking an ethics class. Male students do not appear to increase their moral development level after taking an ethics course (p=0.693), but male levels of moral development differ significantly (p=0.003) by major. Female levels of moral development appear to increase after taking an ethics course (p=0.002) but do not differ by major (p=0.097). These findings indicate that accounting students should be required to take a class in ethics as part of their college curriculum. Students with an ethics class have a significantly higher level of moral development. The challenges facing the profession indicate that public confidence in the reports of client corporations has eroded, and one way to restore this confidence could be to require ethics training for future accountants.
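A 2x3x3 factorial ANOVA of the kind described (ethics course x major x academic level, with DIT score as the dependent variable) can be set up as in the sketch below. The data frame is simulated for illustration; the factor levels mirror the study's design but the scores are not the study's data.

    # Hedged sketch: a 2x3x3 factorial ANOVA on DIT scores, mirroring the design
    # described above. Data are simulated, not the study's.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(2)
    n = 205
    df = pd.DataFrame({
        "ethics": rng.choice(["yes", "no"], n),
        "major":  rng.choice(["accounting", "business", "nonbusiness"], n),
        "level":  rng.choice(["freshman", "junior", "senior"], n),
    })
    df["dit"] = 35 + 5 * (df["ethics"] == "yes") + rng.normal(0, 10, n)  # simulated scores

    model = smf.ols("dit ~ C(ethics) * C(major) * C(level)", data=df).fit()
    print(anova_lm(model, typ=2))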
Abstract:
The presence of harmful algal blooms (HABs) is a growing concern in aquatic environments. Among HAB organisms, cyanobacteria are of special concern because they have been reported worldwide to cause environmental and human health problems through contamination of drinking water. Although several analytical approaches have been applied to monitoring cyanobacterial toxins, conventional methods are costly and time-consuming, with field sampling and subsequent lab analysis taking weeks. Capillary electrophoresis (CE) is a particularly suitable analytical separation method because it couples very small sample volumes and rapid separations to a wide range of selective and sensitive detection techniques. This paper demonstrates a method for the rapid separation and identification of four microcystin variants commonly found in aquatic environments. Procedures were developed for CE coupled to UV detection and to electrospray ionization time-of-flight mass spectrometry (ESI-TOF). All four analytes were separated within 6 minutes. The ESI-TOF experiment provides accurate mass information, which further confirms the identity of the analytes.
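Identification by ESI-TOF accurate mass typically amounts to matching a measured m/z against candidate masses within a ppm tolerance, as in the sketch below. The variant names and [M+H]+ values are approximate literature-style figures quoted only for illustration, and the 10 ppm tolerance is an assumption, not a value from the paper.

    # Hedged sketch: matching an ESI-TOF accurate mass to candidate microcystin
    # variants within a ppm tolerance. The [M+H]+ masses are approximate,
    # illustrative values.
    CANDIDATES = {
        "microcystin-LR": 995.556,
        "microcystin-RR": 1038.573,
        "microcystin-YR": 1045.535,
        "microcystin-LA": 910.493,
    }

    def match_mass(measured_mz: float, tol_ppm: float = 10.0):
        hits = []
        for name, theo in CANDIDATES.items():
            ppm_error = (measured_mz - theo) / theo * 1e6
            if abs(ppm_error) <= tol_ppm:
                hits.append((name, round(ppm_error, 1)))
        return hits

    print(match_mass(995.558))   # -> matches microcystin-LR within ~2 ppm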
Abstract:
Methane hydrate is an ice-like substance that is stable at high pressure and low temperature in continental-margin sediments. Since the discovery of a large number of gas flares at the landward termination of the gas hydrate stability zone off Svalbard, there has been concern that warming bottom waters have started to dissociate large amounts of gas hydrate and that the resulting methane release may accelerate global warming. Here, we corroborate that hydrates play a role in the observed gas seepage, but we present evidence that seepage off Svalbard has been ongoing for at least three thousand years and that seasonal fluctuations of 1-2°C in the bottom-water temperature cause periodic gas hydrate formation and dissociation, which focus seepage at the observed sites.
Abstract:
Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancer. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, characteristics of these pathways such as redundancy, feedback, and drug resistance reduce the efficacy of single-target therapy and necessitate the use of more than one drug to target multiple nodes in the system. Choosing multiple targets with a high therapeutic index poses further challenges, since the combinatorial search space can be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Building such models, however, requires experimentally measured initial concentrations of the components and reaction rates, which are difficult to obtain and, for very large networks, may not yet be available. Fortunately, other modeling tools exist that, though not as powerful as ordinary differential equations, do not need rates and initial conditions to model signaling pathways; Petri nets and graph theory are among them. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph-network centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Then, graph-theoretic centrality measures are employed to prioritize the candidate targets. An algorithm is also developed to check whether the candidate targets are able to disable the intended outputs in the graph model of the system. We implement structural and dynamical models of the ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since the methodology uses only the structural information of the signaling pathways and does not need initial conditions or dynamical rates, it can be applied to larger networks.
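The centrality-based prioritization step can be illustrated on a toy directed graph loosely inspired by an ErbB-Ras-MAPK cascade. The nodes, edges, and candidate list below are illustrative assumptions, not the thesis's Petri net model, and betweenness centrality is used as one representative measure.

    # Hedged sketch: ranking candidate drug targets in a toy signaling graph by a
    # graph-centrality measure, echoing the prioritization step described above.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("EGF", "ErbB1"), ("ErbB1", "Ras"), ("Ras", "Raf"),
        ("Raf", "MEK"), ("MEK", "ERK"), ("ERK", "output"),
        ("ERK", "Raf"),                               # an illustrative feedback edge
    ])

    centrality = nx.betweenness_centrality(G)
    candidates = ["ErbB1", "Ras", "Raf", "MEK", "ERK"]   # e.g., from siphon analysis
    ranked = sorted(candidates, key=lambda n: centrality[n], reverse=True)
    print("candidate targets ranked by betweenness:", ranked)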
Abstract:
In this paper, we consider the uplink of a single-cell massive multiple-input multiple-output (MIMO) system with in-phase and quadrature-phase imbalance (IQI). This scenario is of particular importance in massive MIMO systems, where the deployment of lower-cost, lower-quality components is desirable to make massive MIMO a viable technology. In particular, we investigate the effect of IQI on the performance of massive MIMO employing maximum-ratio combining (MRC) receivers. To study how IQI affects channel estimation, we derive a new channel estimator for the IQI-impaired model and show that IQI can substantially degrade the performance of MRC receivers. Moreover, a low-complexity IQI compensation scheme suitable for massive MIMO is proposed; it is based on estimation of the IQI coefficients and is independent of the channel gain. The performance of the proposed compensation scheme is evaluated analytically by deriving a tractable approximation of the ergodic achievable rate and providing the asymptotic power-scaling laws, assuming transmission over Rayleigh fading channels with log-normal large-scale fading. Finally, we show that massive MIMO effectively suppresses the residual IQI effects as long as the compensation scheme is applied.
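A quick way to see how receive-side IQI degrades MRC is to simulate it with the widely used linear impairment model y_iqi = alpha*y + beta*conj(y). The sketch below does this for a single-user uplink with an M-antenna receiver; the IQI coefficients, noise level, and QPSK signaling are illustrative assumptions, and the paper's estimator and compensation scheme are not reproduced here.

    # Hedged sketch: effect of receive-side IQ imbalance on maximum-ratio combining
    # in a single-cell multi-antenna uplink. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    M, n_sym = 64, 1000                         # BS antennas, transmitted symbols
    h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)   # Rayleigh channel
    s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], n_sym) / np.sqrt(2)    # QPSK symbols
    noise = (rng.normal(size=(M, n_sym)) + 1j * rng.normal(size=(M, n_sym))) * 0.05

    y = np.outer(h, s) + noise                  # ideal received signal
    alpha, beta = 0.98 + 0.02j, 0.05 - 0.01j    # illustrative IQI coefficients
    y_iqi = alpha * y + beta * np.conj(y)       # common linear IQI impairment model

    s_hat = (h.conj() @ y_iqi) / np.linalg.norm(h)**2   # MRC with the true channel
    evm = np.sqrt(np.mean(np.abs(s_hat - s)**2))
    print(f"EVM with uncompensated IQI: {evm:.3f}")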