933 results for Computer Generated Proofs
Abstract:
We extended genetic linkage analysis - an analysis widely used in quantitative genetics - to 3D images to analyze single-gene effects on brain fiber architecture. We collected 4 Tesla diffusion tensor images (DTI) and genotype data from 258 healthy adult twins and their non-twin siblings. After high-dimensional fluid registration, at each voxel we estimated the genetic linkage between the single nucleotide polymorphism (SNP) Val66Met (dbSNP number rs6265) of the BDNF gene (brain-derived neurotrophic factor) and fractional anisotropy (FA) derived from each subject's DTI scan, by fitting structural equation models (SEM) from quantitative genetics. We also assessed how image filtering affects the effect sizes for genetic linkage by examining how the overall significance of voxelwise effects varied with the full width at half maximum (FWHM) of the Gaussian smoothing applied to the FA images. Raw FA maps with no smoothing yielded the greatest sensitivity to detect gene effects when corrected for multiple comparisons using the false discovery rate (FDR) procedure. The BDNF polymorphism significantly contributed to the variation in FA in the posterior cingulate gyrus, where it accounted for around 90-95% of the total variance in FA. Our study generated the first maps to visualize the effect of the BDNF gene on brain fiber integrity, suggesting that common genetic variants may strongly determine white matter integrity.
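The FDR correction mentioned above is the standard Benjamini-Hochberg procedure; as a hedged illustration, the sketch below applies it to a flat array of voxelwise p-values. The synthetic p-values and the level q = 0.05 are assumptions for the example, not values from the study.

```python
import numpy as np

def fdr_threshold(pvals, q=0.05):
    """Benjamini-Hochberg: largest p-value threshold at which the
    expected false discovery rate is controlled at level q."""
    p = np.sort(np.asarray(pvals).ravel())
    n = p.size
    # BH critical line: p_(i) <= (i / n) * q for the sorted p-values
    below = p <= (np.arange(1, n + 1) / n) * q
    if not below.any():
        return 0.0                      # no voxel survives correction
    return p[below.nonzero()[0].max()]

# Illustrative use: 10,000 'voxelwise' p-values with a few true effects.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=9900), rng.uniform(0, 1e-4, size=100)])
thr = fdr_threshold(pvals)
print(f"FDR threshold: {thr:.2e}; voxels passing: {(pvals <= thr).sum()}")
```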
Abstract:
The human connectome has recently become a popular research topic in neuroscience, and many new algorithms have been applied to analyze brain networks. In particular, network topology measures from graph theory have been adapted to analyze network efficiency and 'small-world' properties. While there has been a surge in the number of papers examining connectivity through graph theory, questions remain about its test-retest reliability (TRT). In particular, the reproducibility of structural connectivity measures has not been assessed. We examined the TRT of global connectivity measures generated from graph theory analyses of 17 young adults who underwent two high-angular resolution diffusion imaging (HARDI) scans approximately 3 months apart. Of the measures assessed, modularity had the highest TRT, and it was stable across a range of sparsities (a thresholding parameter used to define which network edges are retained). These reliability measures underline the need to develop network descriptors that are robust to acquisition parameters.
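A hedged sketch of the modularity-across-sparsities analysis: the snippet thresholds a weighted connectivity matrix at several sparsity levels and scores the modularity of a greedy community partition using networkx. The synthetic matrix, the sparsity grid, and the greedy partitioner are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def modularity_at_sparsity(W, sparsity):
    """Keep the strongest `sparsity` fraction of off-diagonal edges,
    then score the modularity of a greedy community partition."""
    iu = np.triu_indices_from(W, k=1)
    weights = W[iu]
    k = max(1, int(sparsity * weights.size))
    thr = np.sort(weights)[-k]               # weight of the k-th strongest edge
    G = nx.Graph()
    G.add_nodes_from(range(W.shape[0]))
    for i, j, w in zip(*iu, weights):
        if w >= thr:
            G.add_edge(i, j, weight=w)
    return modularity(G, greedy_modularity_communities(G))

rng = np.random.default_rng(1)
W = rng.random((70, 70)); W = (W + W.T) / 2  # symmetric stand-in connectome
for s in (0.1, 0.2, 0.3):
    print(f"sparsity {s:.1f}: Q = {modularity_at_sparsity(W, s):.3f}")
```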
Abstract:
This paper presents a numerical study of the response of axially loaded concrete-filled steel tube (CFST) columns under lateral impact loading using explicit non-linear finite element techniques. The aims of this paper are to evaluate the vulnerability of existing columns to credible impact events and to contribute new information towards the safe design of such vulnerable columns. The model incorporates concrete confinement, strain-rate effects for steel and concrete, contact between the steel tube and the concrete, and dynamic relaxation for pre-loading, a relatively recent method for applying pre-load in an explicit solver. The finite element model was first verified against existing experimental results and then employed to conduct a parametric sensitivity analysis. The effects of various structural and load parameters on the impact response of the CFST column were evaluated to identify the key controlling factors. Overall, the major parameters influencing the impact response of the column are the steel tube thickness-to-diameter ratio, the slenderness ratio, and the impact velocity. The findings of this study enhance the current state of knowledge in this area and can serve as a benchmark reference for future analysis and design of CFST columns under lateral impact.
Abstract:
Huge amounts of data are generated from a variety of information sources in healthcare, while the data sources themselves comprise a variety of clinical information systems and corporate data warehouses. The data derived from these sources are used for analysis and trending purposes, thus playing an influential role as a real-time decision-making tool. The unstructured, narrative data provided by these sources qualify as healthcare big data, and researchers argue that the application of big data in healthcare might enable greater accountability and efficiency.
Abstract:
Today, user-generated information such as online reviews has become increasingly important for customers in the decision-making process. Meanwhile, as the volume of online reviews proliferates, there is a pressing demand to help users tackle the information-overload problem. To extract useful information from overwhelming numbers of reviews, considerable work has been proposed, such as review summarization and review selection. In particular, to avoid redundant information, researchers attempt to select a small set of reviews to represent the entire review corpus by preserving its statistical properties (e.g., opinion distribution). However, one significant drawback of existing work is that it measures the utility of the extracted reviews only as a whole, without considering the quality of each individual review. As a result, the set of chosen reviews may contain low-quality ones even if its statistical properties are close to those of the original review corpus, which users do not prefer. In this paper, we propose a review selection method that takes review quality into consideration during the selection process. Specifically, we examine the relationships between product features based on a domain ontology to capture review characteristics, and on that basis select reviews that are of good quality and also preserve the opinion distribution. Our experimental results on real-world review datasets demonstrate that the proposed approach is feasible and effectively improves review selection performance.
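The abstract does not give the selection algorithm; one common formulation of this kind of problem is a greedy selection that trades off opinion-distribution fidelity against per-review quality. The sketch below is such a reading, with the scoring weights and synthetic data being assumptions rather than the paper's method.

```python
import numpy as np

def select_reviews(opinions, quality, k, alpha=0.5):
    """Greedily pick k reviews whose opinion histogram stays close to the
    corpus distribution (L1 distance) while favouring high-quality reviews.
    `opinions` is an int label per review; `quality` a score in [0, 1]."""
    opinions = np.asarray(opinions); quality = np.asarray(quality)
    levels = opinions.max() + 1
    target = np.bincount(opinions, minlength=levels) / opinions.size
    chosen, counts = [], np.zeros(levels)
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(opinions.size):
            if i in chosen:
                continue
            c = counts.copy(); c[opinions[i]] += 1
            fidelity = -np.abs(c / c.sum() - target).sum()
            score = alpha * fidelity + (1 - alpha) * quality[i]
            if score > best_score:
                best, best_score = i, score
        chosen.append(best); counts[opinions[best]] += 1
    return chosen

rng = np.random.default_rng(2)
picked = select_reviews(rng.integers(0, 5, 200), rng.random(200), k=10)
print(picked)
```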
Abstract:
Background: Chlamydia (C.) trachomatis is the most prevalent bacterial sexually transmitted infection worldwide and the leading cause of preventable blindness. Genetic approaches to investigating C. trachomatis have only recently been developed, owing to the organism's intracellular developmental cycle. HtrA is a critical stress-response serine protease and chaperone for many bacteria; in C. trachomatis it has previously been shown, using a chemical inhibitor of CtHtrA activity, to be important for heat stress and the replicative phase of development. In this study, chemically induced SNVs in the cthtrA gene that resulted in amino acid substitutions (A240V, G475E, and P370L) were identified and characterized. Methods: SNVs were initially characterized biochemically in vitro using recombinant protein techniques to confirm a functional impact on proteolysis. The C. trachomatis strains containing the SNVs with marked reductions in proteolysis were investigated in cell culture to identify phenotypes that could be linked to CtHtrA function. Results: The strain harboring the SNV with the most marked impact on proteolysis (cthtrAP370L) showed a significant reduction in the production of infectious elementary bodies. Conclusions: This provides genetic evidence that CtHtrA is critical for the C. trachomatis developmental cycle.
Abstract:
Computer modelling has been used extensively in some processes in the sugar industry to achieve significant gains. This paper reviews the investigations carried out over approximately the last twenty-five years, including the successes but also areas where problems and delays have been encountered. In that time the capabilities of both hardware and software have increased dramatically. For some processes, such as cane cleaning, cane billet preparation, and sugar drying, the application of computer modelling towards improved equipment design and operation has been quite limited. A particular problem has been the large number of particles and particle interactions in these…
Abstract:
Throughout their lives, people are exposed to the pollutants present in indoor air. Recently, Electronic Nicotine Delivery Systems, commonly known as electronic cigarettes, have been widely commercialized: they deliver particles into the lungs of users, but a "second-hand smoke" equivalent has yet to be associated with this indoor source. On the other hand, the naturally occurring radioactive gas radon represents a significant risk for lung cancer, and the cumulative action of these two agents could be worse than either agent acting separately. To investigate the interaction between radon progeny and second-hand aerosol from different types of cigarettes, a designed experimental study was carried out by generating aerosol from e-cigarette vaping as well as from second-hand traditional smoke inside a walk-in radon chamber at the National Institute of Ionizing Radiation Metrology (INMRI) of Italy. In this chamber, the radon present in the air emanates naturally from the floor, and ambient conditions are controlled. To characterize the sidestream smoke emitted by cigarettes, condensation particle counters and a scanning mobility particle sizer were used. Radon concentration in the air was measured with an Alphaguard ionization chamber, whereas radon decay products in the air were measured with the Tracelab BWLM Plus-2S radon daughter monitor. An increase of the Potential Alpha-Energy Concentration (PAEC), due to radon decay products attaching to the aerosol, was found at higher particle number concentrations: for the e-cigarette it varied from 7.47 ± 0.34 MeV L−1 to 12.6 ± 0.26 MeV L−1 (69%). For the traditional cigarette, at the same radon concentration, the increase was from 14.1 ± 0.43 MeV L−1 to 18.6 ± 0.19 MeV L−1 (31%). The equilibrium factor also increased, from 23.4% ± 1.11% to 29.5% ± 0.26% for the e-cigarette and from 30.9% ± 1.0% to 38.1% ± 0.88% for the traditional cigarette. These increases persist long after combustion, prolonging the exposure risk.
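A quick arithmetic check of the relative PAEC increases quoted above, using only the values reported in the abstract:

```python
# Relative PAEC increase, values as reported in the abstract (MeV/L).
for label, before, after in [("e-cigarette", 7.47, 12.6),
                             ("traditional cigarette", 14.1, 18.6)]:
    pct = 100 * (after - before) / before
    print(f"{label}: {before} -> {after} MeV/L = +{pct:.0f}%")
# e-cigarette: +69%; traditional: +32% (the abstract rounds this to 31%)
```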
Abstract:
Companies such as NeuroSky and Emotiv Systems are selling non-medical EEG devices for human-computer interaction. These devices are significantly more affordable than their medical counterparts, and are mainly used to measure levels of engagement, focus, relaxation and stress. This information is sought after for marketing research and games. However, these EEG devices have the potential to enable users to interact with their surrounding environment using thoughts only, without activating any muscles. In this paper, we present preliminary results demonstrating that, despite reduced voltage and time sensitivity compared to medical-grade EEG systems, the quality of the signals from the Emotiv EPOC neuroheadset is sufficiently good to allow discrimination between imaging events. We collected streams of raw EEG data and trained different types of classifiers to discriminate between three states (rest and two imaging events). We achieved a generalisation error of less than 2% for two types of non-linear classifiers.
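The abstract does not name the classifiers used; as a hedged sketch, the snippet below estimates the generalisation error of one common non-linear choice, an RBF-kernel SVM, on stand-in three-class feature vectors via cross-validation. The feature layout and injected class structure are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Stand-in for per-trial EEG feature vectors (e.g. band powers per channel):
# 300 trials, 14 channels x 4 bands, three classes (rest + two imagery events).
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 56))
y = np.repeat([0, 1, 2], 100)
X[y == 1, :5] += 1.5; X[y == 2, 5:10] += 1.5   # injected class structure

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"estimated generalisation error: {1 - scores.mean():.1%}")
```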
Abstract:
When a puzzle game is created, its design parameters must be chosen to allow solvable and interesting challenges to be created for the player. We investigate the use of random sampling as a computationally inexpensive means of automated game analysis, to evaluate the BoxOff family of puzzle games. This analysis reveals useful insights into the game, such as the surprising fact that almost 100% of randomly generated challenges have a solution, but less than 10% will be solved using strictly random play, validating the inventor’s design choices. We show the 1D game to be trivial and the 3D game to be viable.
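A hedged Monte Carlo sketch of the random-sampling methodology: it assumes the BoxOff removal rule that two same-coloured pieces may be taken if the rectangle they delimit contains no other pieces (our paraphrase, not the paper's code), generates random boards, and estimates the fraction cleared by strictly random play. Board size, colour count, and equal colour counts are also assumptions.

```python
import random
from itertools import combinations

def legal(board, a, b):
    """Assumed BoxOff rule: two same-coloured pieces may be removed if
    the rectangle they delimit contains no other piece."""
    if board[a] != board[b]:
        return False
    lo_r, hi_r = sorted((a[0], b[0]))
    lo_c, hi_c = sorted((a[1], b[1]))
    return all(p in (a, b) or not (lo_r <= p[0] <= hi_r and lo_c <= p[1] <= hi_c)
               for p in board)

def random_playout(rows=6, cols=4, colours=3):
    """One strictly random game on a random board; True if fully cleared."""
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    pieces = [i % colours for i in range(len(cells))]  # equal colour counts
    random.shuffle(pieces)
    board = dict(zip(cells, pieces))
    while True:
        moves = [m for m in combinations(board, 2) if legal(board, *m)]
        if not moves:
            return not board              # solved iff the board is empty
        a, b = random.choice(moves)
        del board[a], board[b]

trials = 200
solved = sum(random_playout() for _ in range(trials))
print(f"strictly random play cleared {solved}/{trials} random boards")
```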
Abstract:
Bird species richness surveys are among the most intriguing ecological topics for evaluating environmental health. Here, bird species richness denotes the number of unique bird species in a particular area. Factors complicating the investigation of bird species richness include weather, observation bias, and, most importantly, the prohibitive cost of conducting surveys at large spatiotemporal scales. Thanks to advances in recording techniques, these problems have been alleviated by deploying sensors for acoustic data collection. Although automated detection techniques have been introduced to identify various bird species, the innate complexity of bird vocalizations, the background noise present in recordings, and the escalating volume of acoustic data make the determination of bird species richness a challenging task. In this paper we propose a two-step computer-assisted sampling approach for determining bird species richness in one day of acoustic data. First, a classification model is built on acoustic indices to filter out minutes that contain few bird species. Then the classified bird minutes are ordered by an acoustic index and redundant temporal minutes are removed from the ranked minute sequence. The experimental results show that our method is more efficient at directing experts in the determination of bird species than previous methods.
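A minimal sketch of the two-step idea under stated assumptions: the acoustic indices, the classifier choice, and the redundancy rule (dropping minutes too close in time to an already-kept minute) are all illustrative, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
# Stand-ins: 1440 one-minute segments x 3 acoustic indices, plus labels
# (1 = minute likely contains several bird species) from a training day.
X_train, y_train = rng.random((1440, 3)), rng.integers(0, 2, 1440)
X_day = rng.random((1440, 3))

# Step 1: filter out minutes predicted to contain few bird species.
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
keep = np.flatnonzero(clf.predict(X_day) == 1)

# Step 2: rank the kept minutes by one index, then drop minutes within
# 5 minutes of an already-selected one (assumed redundancy rule).
ranked = keep[np.argsort(-X_day[keep, 0])]
selected = []
for m in ranked:
    if all(abs(m - s) > 5 for s in selected):
        selected.append(m)
print(f"{len(selected)} minutes forwarded to the expert")
```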
Abstract:
In order to understand the role of translational modes in orientational relaxation in dense dipolar liquids, we carried out a computer "experiment" in which a random dipolar lattice was generated by quenching only the translational motion of the molecules of an equilibrated dipolar liquid. The lattice so generated was orientationally disordered and positionally random. A detailed study of orientational relaxation in this random dipolar lattice revealed interesting differences from the corresponding dipolar liquid. In particular, we found that the relaxation of the collective orientational correlation functions at intermediate wave numbers was markedly slower at long times for the random lattice than for the liquid. This verified the important role of the translational modes in this regime, as recently predicted by molecular theories. The single-particle orientational correlation functions of the random lattice also decayed significantly more slowly at long times than those of the dipolar liquid.
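For reference, the single-particle orientational correlation function referred to above is commonly defined as C_l(t) = <P_l(u_i(0) · u_i(t))>; below is a minimal numpy sketch for rank l = 1, with a synthetic correlated-random-walk trajectory standing in for simulation output.

```python
import numpy as np

def orient_corr(u, max_lag):
    """C_1(t) = < u_i(t0) . u_i(t0 + t) >, averaged over molecules i
    and time origins t0. `u` has shape (n_frames, n_molecules, 3)."""
    n_frames = u.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.einsum('tmd,tmd->tm', u[:n_frames - lag], u[lag:])
        c[lag] = dots.mean()
    return c

# Synthetic stand-in: unit vectors whose directions drift gradually.
rng = np.random.default_rng(5)
u = rng.normal(size=(500, 100, 3))
for t in range(1, 500):
    u[t] = u[t - 1] + 0.1 * u[t]          # correlated random walk
u /= np.linalg.norm(u, axis=-1, keepdims=True)
print(orient_corr(u, max_lag=5))           # decays from C(0) = 1
```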
Abstract:
Document clustering is one of the prominent methods for mining important information from the vast amount of data available on the web. However, document clustering generally suffers from the curse of dimensionality. Fortunately, in high-dimensional space, data points tend to be more concentrated in some areas of clusters. We take advantage of this phenomenon by introducing a novel concept of dynamic cluster representation, termed loci. Clusters' loci are efficiently calculated using documents' ranking scores generated by a search engine. We propose a fast loci-based semi-supervised document clustering algorithm that uses clusters' loci instead of conventional centroids for assigning documents to clusters. Empirical analysis on real-world datasets shows that the proposed method produces cluster solutions of promising quality and is substantially faster than several benchmarked centroid-based semi-supervised document clustering methods.
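The abstract does not define how loci are computed; one hedged reading is sketched below, where a cluster's locus is a ranking-score-weighted average of its top-ranked documents (random vectors stand in for tf-idf rows and search-engine scores). The names, weighting scheme, and top-k cutoff are assumptions.

```python
import numpy as np

def compute_locus(docs, scores, top_k=10):
    """Locus = ranking-score-weighted mean of the cluster's top-ranked
    documents (one hedged reading of the paper's 'loci').
    `docs`: (n_docs, dim) tf-idf rows; `scores`: ranking scores."""
    top = np.argsort(-scores)[:top_k]
    w = scores[top] / scores[top].sum()
    return w @ docs[top]

def assign(doc, loci):
    """Assign a document to the cluster with the most similar locus (cosine)."""
    sims = [(doc @ l) / (np.linalg.norm(doc) * np.linalg.norm(l) + 1e-12)
            for l in loci]
    return int(np.argmax(sims))

rng = np.random.default_rng(6)
clusters = [rng.random((50, 200)) for _ in range(3)]   # stand-in tf-idf rows
loci = [compute_locus(c, rng.random(50)) for c in clusters]
print(assign(rng.random(200), loci))
```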
Abstract:
Assuming the grinding wheel surface to be fractal in nature, the maximum envelope profile of the wheel and the contact deflections are estimated over a range of length scales. This gives an estimate of the 'no wear' roughness of a surface-ground metal. Four test materials, aluminum, copper, titanium, and steel, were surface ground and their surface power spectra estimated. The departure of these power spectra from the 'no wear' estimates is studied in terms of the traction-induced wear damage of the surfaces. The surface power spectra in grinding are influenced by hardness, and the power is enhanced by wear damage. No such correlation with hardness was found for the polished surfaces, whose roughness is insensitive to mechanical properties and appears to be influenced by the microstructure and physical properties of the material.
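A minimal sketch of estimating a surface profile's power spectrum, the quantity analyzed above: scipy's Welch estimator applied to a synthetic self-affine (fractal-like) profile. The profile generator, Hurst exponent, and sampling parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import signal

# Synthetic self-affine profile: amplitude spectrum ~ f^-(2H+1)/2 gives
# fractal-like roughness (Hurst exponent H is an assumption).
rng = np.random.default_rng(7)
n, dx, H = 4096, 1e-6, 0.8                 # samples, spacing (m), Hurst exp.
freqs = np.fft.rfftfreq(n, dx)
amp = np.where(freqs > 0, freqs ** (-(2 * H + 1) / 2), 0.0)
phase = np.exp(2j * np.pi * rng.random(freqs.size))
profile = np.fft.irfft(amp * phase, n)

# Welch estimate of the profile's power spectral density.
f, psd = signal.welch(profile, fs=1 / dx, nperseg=1024)
slope = np.polyfit(np.log(f[1:]), np.log(psd[1:]), 1)[0]
print(f"fitted spectral slope: {slope:.2f} (expected about {-(2 * H + 1):.1f})")
```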
Abstract:
Many websites now provide the facility for users to rate the quality of items based on their opinions. These ratings are later used to produce item reputation scores. The majority of websites apply the mean method to aggregate user ratings. This method is very simple but is not considered an accurate aggregator. Many methods have been proposed to make aggregators produce more accurate reputation scores. In the majority of the proposed methods the authors use extra information about the rating providers or about the context (e.g. time) in which the rating was given. However, this information is not always available. In such cases these methods fall back on the mean method or other simple alternatives. In this paper, we propose a novel reputation model that generates more accurate item reputation scores based on the collected ratings only. Our model embeds previously disregarded statistical properties of a given rating dataset in order to enhance the accuracy of the generated reputation scores. In more detail, we use the Beta distribution to produce weights for ratings and aggregate the ratings using the weighted mean method. Experiments show that the proposed model outperforms current state-of-the-art models.
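The abstract leaves the weighting unspecified; below is a hedged sketch of one reading, where each rating's weight is the density of a Beta distribution fitted to the normalized ratings by the method of moments. The fit, scale mapping, and sample data are assumptions, not the paper's exact model.

```python
import numpy as np
from scipy import stats

def beta_weighted_mean(ratings, scale=5):
    """Weight each rating by a Beta density fitted (method of moments) to
    the normalized ratings, then take the weighted mean. One hedged
    reading of 'Beta-distribution weights', not the paper's exact model."""
    x = (np.asarray(ratings, float) - 1) / (scale - 1)  # map 1..5 -> 0..1
    x = np.clip(x, 1e-3, 1 - 1e-3)
    m, v = x.mean(), x.var()
    if v < 1e-9:                         # all ratings identical: plain mean
        return float(np.mean(ratings))
    common = m * (1 - m) / v - 1         # method-of-moments Beta fit
    a, b = m * common, (1 - m) * common
    w = stats.beta.pdf(x, a, b)
    return float((w * ratings).sum() / w.sum())

ratings = [5, 5, 4, 5, 4, 1, 5, 4]       # one outlier rating
print(f"mean: {np.mean(ratings):.2f}  "
      f"beta-weighted: {beta_weighted_mean(ratings):.2f}")
```

On this sample, the weighted mean sits closer to the majority opinion than the plain mean, illustrating how density-based weights can damp the influence of an atypical rating.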