798 results for Data-Intensive Science


Relevance:

40.00%

Publisher:

Abstract:

The term "Artificial Intelligence" has acquired a great deal of baggage since its introduction, and in its current incarnation it is largely synonymous with Deep Learning (DL). The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems can arise, especially in fields not closely related to the tasks pursued by the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that integrate gracefully into established workflows, since in many real-world scenarios AI may not be good enough to completely replace humans; often such replacement is unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from "better models" towards better, and smaller, data, an approach he named Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies in which these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving both datasets and models. The last chapter is a slight deviation, presenting studies on the pandemic while preserving the human- and data-centric perspective.

Relevance:

40.00%

Publisher:

Abstract:

The discovery of new materials and their functions has always been a fundamental component of technological progress. Nowadays, the quest for new materials is stronger than ever: sustainability, medicine, robotics, and electronics all depend on the ability to create specifically tailored materials. However, designing materials with desired properties is a difficult task, and the complexity of the discipline makes it hard to identify general criteria. While scientists have developed a set of best practices (often based on experience and expertise), materials design is still a trial-and-error process. This becomes even more complex when dealing with advanced functional materials: their properties depend on structural and morphological features, which in turn depend on fabrication procedures and environment, and subtle alterations lead to dramatically different results. Because of this, materials modeling and design is one of the most prolific research fields, and many techniques and instruments are continuously developed to enable new possibilities, in both the experimental and the computational realms. However, the field is strongly affected by unorganized file management and a proliferation of custom data formats and storage procedures, in experimental and computational research alike. Results are difficult to find, interpret, and reuse, and a huge amount of time is spent interpreting and reorganizing data. This also strongly limits the application of data-driven and machine learning techniques. This work introduces possible solutions to the problems described above: developing features for specific classes of advanced materials and using them to train machine learning models that accelerate computational predictions for molecular compounds; developing methods for organizing heterogeneous materials data; automating the use of device simulations to train machine learning models; and dealing with scattered experimental data and using it to discover new patterns.
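
To make the surrogate-modeling step concrete, here is a minimal sketch of training a machine learning model on features computed for molecular compounds so that it can stand in for an expensive computation. The featurize function, the field names, and the use of scikit-learn are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical descriptor: map each compound to a fixed-length feature vector
# (e.g., simple size, topology, and electrostatic estimates).
def featurize(compound: dict) -> np.ndarray:
    return np.array([
        compound["n_atoms"],
        compound["n_rings"],
        compound["molecular_weight"],
        compound["dipole_estimate"],
    ])

# Hypothetical dataset: compounds with a target property obtained from an
# expensive simulation that the model will learn to approximate.
compounds = [
    {"n_atoms": 12, "n_rings": 1, "molecular_weight": 94.1,
     "dipole_estimate": 1.7, "target": -0.31},
    {"n_atoms": 21, "n_rings": 2, "molecular_weight": 156.2,
     "dipole_estimate": 0.9, "target": -0.58},
    # ... in practice, hundreds or thousands of computed examples
]

X = np.array([featurize(c) for c in compounds])
y = np.array([c["target"] for c in compounds])

# Train a surrogate; once fitted, predictions cost microseconds instead of
# the hours an explicit computation would take.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)
```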

Relevance:

40.00%

Publisher:

Abstract:

In today's business landscape, the ability of a company or a service provider to steer its innovation programmatically is of fundamental importance in order to remain competitive in the market. In many cases, this means investing a considerable amount of money in projects that will improve essential aspects of the product or service and that will have an important impact on the company's digital transformation. The study proposed here concerns two approaches that are typically in antithesis to each other precisely because they are based on two different types of data, Big Data and Thick Data. The two approaches are, respectively, Data Science and Design Thinking. In the following chapters, after defining the Design Thinking and Data Science approaches, the concept of blending is defined, together with the issues surrounding the intersection of the two innovation methods. To highlight the various aspects of the topic, cases of companies that have integrated the two approaches into their innovation processes, obtaining important results, are also reported. In particular, the author's research work on the review, classification, and analysis of the existing literature at the intersection of data-driven and design-driven innovation is presented. Finally, a business case conducted at the hospital and healthcare organization of Parma is reported, in which, faced with a problem concerning the relationship between hospital clinicians and community clinicians, an innovative system was designed through the use of Design Thinking. In addition, a critical "what-if" analysis is developed in order to elaborate a possible scenario integrating methods or techniques from the world of Data Science and to apply it to the case study in question.

Relevance:

30.00%

Publisher:

Abstract:

Diagnostic methods have been an important tool in regression analysis for detecting anomalies such as departures from the error assumptions and the presence of outliers and influential observations in fitted models. Assuming censored data, we considered both a classical analysis and a Bayesian analysis, assuming non-informative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Several influence methods, such as local influence, total local influence of an individual, local influence on predictions, and generalized leverage, were derived, analyzed, and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that removing the most influential observations changes the decision about which model best fits the data.
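
As an illustration of the sampling machinery mentioned above, here is a minimal random-walk Metropolis-Hastings sketch in Python. The log-posterior is a stand-in (a standard normal); the paper's actual target would be the posterior of the cure-fraction survival model, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta: float) -> float:
    # Stand-in log-posterior (standard normal). In the paper's setting this
    # would be the log-likelihood of the cure-fraction survival model plus
    # the log of the non-informative prior.
    return -0.5 * theta**2

def metropolis_hastings(n_samples: int, step: float = 1.0) -> np.ndarray:
    samples = np.empty(n_samples)
    theta = 0.0
    for i in range(n_samples):
        proposal = theta + step * rng.normal()        # random-walk proposal
        log_ratio = log_posterior(proposal) - log_posterior(theta)
        if np.log(rng.uniform()) < log_ratio:         # accept/reject step
            theta = proposal
        samples[i] = theta
    return samples

draws = metropolis_hastings(10_000)
print(draws.mean(), draws.std())  # posterior summaries of interest
```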

Relevance:

30.00%

Publisher:

Abstract:

Background: The inherent complexity of statistical methods and clinical phenomena compels researchers with diverse domains of expertise to work in interdisciplinary teams in which none of them has complete knowledge of their counterpart's field. As a result, knowledge exchange may often be characterized by miscommunication, leading to misinterpretation and ultimately resulting in errors in research and even in clinical practice. Although communication has a central role in interdisciplinary collaboration and miscommunication can have a negative impact on research processes, to the best of our knowledge no study has yet explored how data analysis specialists and clinical researchers communicate over time. Methods/Principal Findings: We conducted a qualitative analysis of encounters between clinical researchers and data analysis specialists (an epidemiologist, a clinical epidemiologist, and a data mining specialist). These encounters were recorded and systematically analyzed using a grounded theory methodology for the extraction of emerging themes, followed by data triangulation and analysis of negative cases for validation. A policy analysis was then performed using a system dynamics methodology, looking for potential interventions to improve this process. Four major themes emerged. Definitions using lay language were frequently employed as a way to bridge the language gap between the specialties. Thought experiments presented a series of "what if" situations that helped clarify how the method or information from the other field would behave if exposed to alternative situations, ultimately aiding in explaining their main objective. Metaphors and analogies were used to translate concepts across fields, from the unfamiliar to the familiar. Prolepsis was used to anticipate study outcomes, thus helping specialists understand the current context based on an understanding of their final goal. Conclusion/Significance: The communication between clinical researchers and data analysis specialists presents multiple challenges that can lead to errors.

Relevance:

30.00%

Publisher:

Abstract:

Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of these data are critical to understanding the regulatory logic encoded in the genome, by which the cell dynamically affects its physiology and interacts with its environment. Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search, and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability: by connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization, with connectivity to major public repositories of sequences, interactions, and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to their coordinates on the genome. Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations, yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to respond dynamically to its environment.
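
A minimal sketch of genomic coordinates acting as a common join key, using Python's built-in sqlite3 as a stand-in for the browser's in-process database. The table names and schemas are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transcripts   (name TEXT, chrom TEXT, start INT, stop INT, expression REAL);
CREATE TABLE binding_sites (factor TEXT, chrom TEXT, start INT, stop INT);
INSERT INTO transcripts   VALUES ('geneA', 'chr1', 100, 900, 7.2);
INSERT INTO binding_sites VALUES ('TF1',   'chr1', 850, 950);
""")

# Heterogeneous tracks are related by coordinate overlap on the same chromosome.
rows = con.execute("""
SELECT t.name, b.factor
FROM transcripts t
JOIN binding_sites b
  ON t.chrom = b.chrom
 AND t.start <= b.stop
 AND b.start <= t.stop
""").fetchall()
print(rows)  # [('geneA', 'TF1')]
```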

Relevance:

30.00%

Publisher:

Abstract:

Melanoma is a highly aggressive and therapy-resistant tumor for which the identification of specific markers and therapeutic targets is highly desirable. We describe here the development and use of a bioinformatic pipeline tool, made publicly available under the name EST2TSE, for the in silico detection of candidate genes with tissue-specific expression. Using this tool we mined the human EST (Expressed Sequence Tag) database for sequences derived exclusively from melanoma. We found 29 UniGene clusters of multiple ESTs with the potential to predict novel genes with melanoma-specific expression. Using a diverse panel of human tissues and cell lines, we validated the expression of a subset of three previously uncharacterized genes (clusters Hs.295012, Hs.518391, and Hs.559350) as highly restricted to melanoma/melanocytes and named them RMEL1, 2, and 3, respectively. Expression analysis in nevi, primary melanomas, and metastatic melanomas revealed RMEL1 as a novel melanocytic-lineage-specific gene up-regulated during melanoma development. RMEL2 expression was restricted to melanoma tissues and glioblastoma. RMEL3 showed strong up-regulation in nevi and was lost in metastatic tumors. Interestingly, we found correlations of RMEL2 and RMEL3 expression with improved patient outcome, suggesting tumor and/or metastasis suppressor functions for these genes. The three genes are composed of multiple exons and map to 2q12.2, 1q25.3, and 5q11.2, respectively. They are well conserved throughout primates, but not in other genomes, and were predicted to have no coding potential, although primate-conserved and human-specific short ORFs could be found. Hairpin RNA secondary structures were also predicted. In conclusion, this work offers new melanoma-specific genes for future validation as prognostic markers or as targets for the development of therapeutic strategies to treat melanoma.
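
The abstract does not detail EST2TSE's internals, but the core tissue-specificity filter it describes can be sketched as follows: group ESTs by UniGene cluster and keep clusters of multiple ESTs whose sequences all derive from the target tissue. The record layout here is hypothetical.

```python
from collections import defaultdict

# Hypothetical EST records: (unigene_cluster, library_tissue)
ests = [
    ("Hs.295012", "melanoma"),
    ("Hs.295012", "melanoma"),
    ("Hs.518391", "melanoma"),
    ("Hs.000001", "melanoma"),
    ("Hs.000001", "liver"),     # mixed-tissue cluster, should be rejected
]

by_cluster = defaultdict(list)
for cluster, tissue in ests:
    by_cluster[cluster].append(tissue)

# Candidate tissue-specific clusters: more than one EST, all from the target
# tissue and none from anywhere else.
candidates = [
    cluster for cluster, tissues in by_cluster.items()
    if len(tissues) > 1 and set(tissues) == {"melanoma"}
]
print(candidates)  # ['Hs.295012']
```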

Relevance:

30.00%

Publisher:

Abstract:

Background: Without intensive selection, the majority of bovine oocytes submitted to in vitro embryo production (IVP) fail to develop to the blastocyst stage. This is attributed partly to their maturation status and competence. Using the Affymetrix GeneChip Bovine Genome Array, global mRNA expression analysis of immature (GV) and in vitro matured (IVM) bovine oocytes was carried out to characterize the transcriptome of bovine oocytes and then to determine, using a variety of approaches, whether the observed transcriptional changes during IVM were real or an artifact of the techniques used during analysis. Results: 8489 transcripts were detected across the two oocyte groups, of which approximately 25.0% (2117 transcripts) were differentially expressed (p < 0.001), corresponding to 589 over-expressed and 1528 under-expressed transcripts in the IVM oocytes compared with their immature counterparts. Over-expression of transcripts by IVM oocytes is particularly interesting; therefore, a variety of approaches were employed to determine whether the observed transcriptional changes during IVM were real or an artifact of the analysis techniques, including the analysis of transcript abundance in oocytes matured in vitro in the presence of α-amanitin. Subsets of the differentially expressed genes were also validated by quantitative real-time PCR (qPCR), and the gene expression data were classified according to gene ontology and pathway enrichment. Numerous cell-cycle-linked (CDC2, CDK5, CDK8, HSPA2, MAPK14, TXNL4B), molecular transport (STX5, STX17, SEC22A, SEC22B), and differentiation-related (NACA) genes were found among the transcripts over-expressed in GV oocytes compared with their matured counterparts, while ANXA1, PLAU, STC1, and LUM were among the genes over-expressed after oocyte maturation. Conclusion: Using sequential experiments, we have shown and confirmed transcriptional changes during oocyte maturation. This dataset provides a unique reference resource for studies concerned with the molecular mechanisms controlling oocyte meiotic maturation in cattle, addresses the existing conflict over transcription during meiotic maturation, and contributes to the global goal of improving assisted reproductive technology.
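
A minimal sketch of the per-transcript differential-expression test behind counts such as "589 over-expressed": a two-sample test per transcript followed by multiple-testing correction. The data are synthetic and the scipy/statsmodels calls are illustrative; the study itself used the Affymetrix analysis stack, not this code.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical log2 expression matrix: rows = transcripts,
# columns = replicate arrays for GV and IVM oocytes.
gv = rng.normal(8.0, 1.0, size=(1000, 4))
ivm = rng.normal(8.0, 1.0, size=(1000, 4))
ivm[:50] += 2.0  # spike in 50 truly changed transcripts

# Per-transcript two-sample t-test, then Benjamini-Hochberg correction
# at the abstract's p < 0.001 threshold.
t, p = stats.ttest_ind(gv, ivm, axis=1)
reject, p_adj, _, _ = multipletests(p, alpha=0.001, method="fdr_bh")
print(int(reject.sum()), "transcripts called differentially expressed")
```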

Relevance:

30.00%

Publisher:

Abstract:

The A1763 superstructure at z = 0.23 contains the first galaxy filament to be directly detected using mid-infrared observations. Our previous work has shown that the frequency of starbursting galaxies, as characterized by 24 μm emission, is much higher within the filament than at either the center of the rich galaxy cluster or the field surrounding the system. New Very Large Array and XMM-Newton data are presented here. We use the radio and X-ray data to examine the fraction and location of active galaxies, both active galactic nuclei (AGNs) and starbursts (SBs). The radio far-infrared correlation, X-ray point-source locations, IRAC colors, and quasar positions are all used to gain an understanding of the presence of dominant AGNs. We find very few MIPS-selected galaxies that are clearly dominated by AGN activity. Most radio-selected members within the filament are SBs. Within the supercluster, three of eight spectroscopic members detected both in the radio and in the mid-infrared are radio-bright AGNs; they are found at or near the core of A1763. The five SBs are located further along the filament. We calculate the physical properties of the known wide-angle tail (WAT) source, which is the brightest cluster galaxy of A1763. A second double-lobe source is found along the filament, well outside the virial radius of either cluster. The velocity offset of the WAT from the X-ray centroid and the bend of the WAT in the intracluster medium are both consistent with ram-pressure stripping, indicative of streaming motions along the direction of the filament. We consider this further evidence of the cluster-feeding nature of the galaxy filament.

Relevance:

30.00%

Publisher:

Abstract:

The HR Del nova remnant was observed with the IFU-GMOS at Gemini North. The spatially resolved spectral data cube was used in the kinematic, morphological, and abundance analysis of the ejecta. The line maps show a very clumpy shell with two main symmetric structures. The first one is the outer part of the shell seen in Hα, which forms two rings projected on the sky plane. These ring structures correspond to a closed hourglass shape, first proposed by Harman & O'Brien. The equatorial emission enhancement is caused by the superimposed hourglass structures in the line of sight. The second structure, seen only in the [O III] and [N II] maps, is located along the polar directions inside the hourglass structure. Abundance gradients between the polar caps and the equatorial region were not found. However, the outer part of the shell seems to be less abundant in oxygen and nitrogen than the inner regions. Detailed 2.5-dimensional photoionization modeling of the three-dimensional shell was performed using the mass distribution inferred from the observations and the presence of mass clumps. The resulting model grids are used to constrain the physical properties of the shell as well as the central ionizing source. A sequence of three-dimensional clumpy models including a disk-shaped ionization source is able to reproduce the ionization gradients between the polar and equatorial regions of the shell. Differences between shell axial ratios in different lines can also be explained by aspherical illumination. A total shell mass of 9 × 10⁻⁴ M☉ is derived from these models. We estimate that 50%-70% of the shell mass is contained in neutral clumps with a density contrast of up to a factor of 30.

Relevance:

30.00%

Publisher:

Abstract:

We report on an intensive observational campaign carried out with HARPS at the 3.6 m telescope at La Silla on the star CoRoT-7. Additional simultaneous photometric measurements carried out with the Euler Swiss telescope demonstrated that the observed radial velocity variations are dominated by rotational modulation from cool spots on the stellar surface. Several approaches were used to extract the radial velocity signal of the planet(s) from the stellar activity signal. First, a simple pre-whitening procedure was employed to find and subsequently remove periodic signals from the complex frequency structure of the radial velocity data. The dominant frequency in the power spectrum was found at 23 days, which corresponds to the rotation period of CoRoT-7. The 0.8535-day period of the planetary candidate CoRoT-7b was detected with an amplitude of 3.3 m s⁻¹. Most other frequencies, some with amplitudes larger than the CoRoT-7b signal, are most likely associated with activity. A second approach used harmonic decomposition of the rotational period and up to its first three harmonics to filter the activity signal out of the radial velocity variations caused by orbiting planets. After correcting the radial velocity data for activity, two periodic signals were detected: the CoRoT-7b transit period and a second one with a period of 3.69 days and an amplitude of 4 m s⁻¹. This second signal was also found in the pre-whitening analysis. We attribute it to a second, more remote planet, CoRoT-7c. The orbital solution of both planets is compatible with circular orbits. The mass of CoRoT-7b is 4.8 ± 0.8 M⊕ and that of CoRoT-7c is 8.4 ± 0.9 M⊕, assuming both planets are on coplanar orbits. We also investigated the false-positive scenario of a blend by a faint stellar binary, which may be rejected by the stability of the bisector on a nightly scale. According to their masses, both planets belong to the super-Earth planet category. The average density of CoRoT-7b is ρ = 5.6 ± 1.3 g cm⁻³, similar to Earth's. The CoRoT-7 planetary system provides the first insight into the physical nature of the short-period super-Earth planets recently detected by radial velocity surveys. These planets may be denser than Neptune and therefore likely made of rocks, like the Earth, or of a mix of water ice and rocks.
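
A minimal sketch of the harmonic-decomposition filter described above: fit sinusoids at the rotation period and its first three harmonics by linear least squares, then subtract them from the radial velocities. The data here are synthetic; the paper's actual implementation is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
P_ROT = 23.0                              # stellar rotation period (days)
t = np.sort(rng.uniform(0, 100, 200))     # observation times (days)

# Synthetic RVs (m/s): activity at P_rot and a harmonic, plus a planet
# signal at 0.8535 d and white noise.
rv = (8.0 * np.sin(2 * np.pi * t / P_ROT)
      + 4.0 * np.sin(2 * np.pi * t / (P_ROT / 2))
      + 3.3 * np.sin(2 * np.pi * t / 0.8535)
      + rng.normal(0, 1, t.size))

# Design matrix: sine and cosine at P_rot and its first three harmonics.
cols = []
for k in range(1, 5):
    w = 2 * np.pi * k / P_ROT
    cols += [np.sin(w * t), np.cos(w * t)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
rv_filtered = rv - A @ coef               # activity-corrected radial velocities
```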

Relevance:

30.00%

Publisher:

Abstract:

Using the published KTeV samples of K_L → π^± e^∓ ν and K_L → π^± μ^∓ ν decays, we perform a reanalysis of the scalar and vector form factors based on the dispersive parametrization. We obtain the phase-space integrals I_K^e = 0.15446 ± 0.00025 and I_K^μ = 0.10219 ± 0.00025. For the scalar form factor parametrization, the only free parameter is the normalized form factor value at the Callan-Treiman point, C; our best fit gives ln C = 0.1915 ± 0.0122. We also study the sensitivity of C to different parametrizations of the vector form factor. The results for the phase-space integrals and C are then used to perform tests of the Standard Model. Finally, we compare our results with lattice QCD calculations of F_K/F_π and f_+(0).
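
The dispersive parametrization itself is not reproduced in the abstract, so as a generic illustration of fitting a vector form factor parametrization to data, here is a least-squares fit of the standard quadratic Taylor form f_+(t) = f_+(0)(1 + λ′ t/m_π² + ½ λ″ (t/m_π²)²), one of the alternative parametrizations alluded to above. The measurements below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

M_PI2 = 0.0195  # m_pi^2 in GeV^2 (charged pion, approximate)

def f_plus(t, f0, lam1, lam2):
    # Quadratic Taylor parametrization of the vector form factor.
    x = t / M_PI2
    return f0 * (1 + lam1 * x + 0.5 * lam2 * x**2)

# Synthetic "measured" form factor values over a K_e3-like momentum range.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.11, 25)            # GeV^2
y = f_plus(t, 1.0, 0.025, 0.0016) + rng.normal(0, 0.002, t.size)

popt, pcov = curve_fit(f_plus, t, y, p0=[1.0, 0.02, 0.001])
print(popt, np.sqrt(np.diag(pcov)))       # best-fit parameters and errors
```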

Relevance:

30.00%

Publisher:

Abstract:

Tibolone is used for hormone replacement in postmenopausal women, and isotibolone is considered the major degradation product of tibolone. Isotibolone can also be present in tibolone API raw materials as a result of inadequate synthesis. It is therefore necessary to identify and quantify its presence in the quality control of both the API and drug products. In this work we present the indexing of an isotibolone X-ray diffraction pattern measured with synchrotron light (λ = 1.2407 Å) in transmission mode. The characterization of the isotibolone sample by IR spectroscopy, elemental analysis, and thermal analysis is also presented. The isotibolone crystallographic data are a = 6.8066 Å, b = 20.7350 Å, c = 6.4489 Å, β = 76.428°, V = 884.75 Å³, space group P2₁, ρ_o = 1.187 g cm⁻³, and Z = 2. © 2009 International Centre for Diffraction Data. [DOI: 10.1154/1.3257612]
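
As a quick consistency check, the monoclinic cell volume V = a·b·c·sin β computed from the parameters above reproduces the quoted value:

```python
import math

a, b, c = 6.8066, 20.7350, 6.4489   # lattice parameters in Å
beta = math.radians(76.428)         # monoclinic angle

V = a * b * c * math.sin(beta)      # V = a·b·c·sin(β) for a monoclinic cell
print(f"{V:.2f} Å^3")               # ≈ 884.75, matching the reported volume
```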

Relevance:

30.00%

Publisher:

Abstract:

Agricultural management practices that promote net carbon (C) accumulation in the soil have been considered an important potential mitigation option to combat global warming. The focus of this work is the change in the sugarcane harvesting system to one that incorporates C into the soil from crop residues. The main objective was to assess and discuss the changes in soil organic C stocks caused by the conversion from burnt to unburnt sugarcane harvesting systems in Brazil, considering the main soils and climates associated with this crop. For this purpose, a dataset was compiled from a literature review of soils under sugarcane in Brazil. Although not necessarily from experimental studies, only paired comparisons were examined, and for each site the dominant soil type, topography, and climate were similar. The results show a mean annual C accumulation rate of 1.5 Mg ha⁻¹ year⁻¹ for the surface layer (0-30 cm depth; 0.73 and 2.04 Mg ha⁻¹ year⁻¹ for sandy and clay soils, respectively) caused by the conversion from a burnt to an unburnt sugarcane harvesting system. The findings suggest that soil should be included in future studies related to life cycle assessment and the C footprint of Brazilian sugarcane ethanol.

Relevance:

30.00%

Publisher:

Abstract:

Soil bulk density values are needed to convert organic carbon content to mass of organic carbon per unit area. However, field sampling and measurement of soil bulk density are labour-intensive, costly, and tedious. Near-infrared reflectance spectroscopy (NIRS) is a physically non-destructive, rapid, reproducible, and low-cost method that characterizes materials according to their reflectance in the near-infrared spectral region. The aim of this paper was to investigate the ability of NIRS to predict soil bulk density and to compare its performance with published pedotransfer functions. The study was carried out on a dataset of 1184 soil samples originating from a reforestation area in the Brazilian Amazon basin, and conventional soil bulk density values were obtained with metallic "core cylinders". The results indicate that the modified partial least squares regression used on spectral data is an alternative to the published pedotransfer functions tested in this study for soil bulk density prediction. The NIRS method presented the closest-to-zero accuracy error (−0.002 g cm⁻³) and the lowest prediction error (0.13 g cm⁻³), and the coefficient of variation of the validation sets ranged from 8.1 to 8.9% of the mean reference values. Nevertheless, further research is required to assess the limits and specificities of the NIRS method, but it may have advantages for soil bulk density predictions, especially in environments such as the Amazon forest.
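
A minimal sketch of the calibration approach: partial least squares regression from NIR spectra to bulk density. scikit-learn's PLSRegression is used here as a stand-in for the modified PLS of the paper, and the spectra are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 1184 samples x 700 NIR wavelengths, with bulk density
# loosely driven by a few spectral regions plus noise.
X = rng.normal(size=(1184, 700))
true_coef = np.zeros(700)
true_coef[[120, 340, 560]] = [0.05, -0.04, 0.03]
y = 1.3 + X @ true_coef + rng.normal(0, 0.05, 1184)   # g cm^-3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=10)
pls.fit(X_tr, y_tr)

pred = pls.predict(X_te).ravel()
rmse = np.sqrt(np.mean((pred - y_te) ** 2))           # prediction error
bias = np.mean(pred - y_te)                           # accuracy error
print(f"RMSE = {rmse:.3f} g cm^-3, bias = {bias:.3f} g cm^-3")
```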