881 results for Large-scale analysis
Abstract:
The development of Next Generation Sequencing has propelled Biology into the Big Data era. The ever-increasing gap between proteins with known sequences and those with a complete functional annotation calls for computational methods for automatic structural and functional annotation. My research has focused on proteins and has so far led to the development of three novel tools, DeepREx, E-SNPs&GO and ISPRED-SEQ, based on Machine and Deep Learning approaches. DeepREx computes the solvent exposure of residues in a protein chain, a problem relevant to defining structural constraints on the possible folding of the protein. DeepREx exploits Long Short-Term Memory layers to capture residue-level interactions between positions distant in the sequence, achieving state-of-the-art performance. With DeepREx, I conducted a large-scale analysis investigating the relationship between the solvent exposure of a residue and the probability that it is pathogenic upon mutation. E-SNPs&GO predicts the pathogenicity of a Single Residue Variation. Variations occurring in a protein sequence can have different effects, possibly leading to the onset of diseases. E-SNPs&GO exploits protein embeddings generated by two novel Protein Language Models (PLMs), as well as a new way of representing functional information derived from the Gene Ontology. The method achieves state-of-the-art performance and is extremely time-efficient compared to traditional approaches. ISPRED-SEQ predicts the presence of Protein-Protein Interaction sites in a protein sequence. Knowing how a protein interacts with other molecules is crucial for accurate functional characterization. ISPRED-SEQ embeds the protein sequence with two novel PLMs and then exploits a convolutional layer to parse the local context, greatly surpassing the current state of the art. All three methods are published in international journals and are available as user-friendly web servers. They were developed following standard guidelines for FAIRness (FAIR: Findable, Accessible, Interoperable, Reusable) and are integrated into the public collection of tools provided by ELIXIR, the European infrastructure for Bioinformatics.
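To make the architectural idea concrete, here is a minimal PyTorch sketch of an LSTM-based per-residue classifier in the spirit of DeepREx; the layer sizes, input features and class names are hypothetical, not the published configuration.

```python
# Minimal sketch of an LSTM-based per-residue binary classifier
# (buried vs. exposed), in the spirit of DeepREx. Hyperparameters
# and input features are hypothetical, not the published setup.
import torch
import torch.nn as nn

class ResidueExposureNet(nn.Module):
    def __init__(self, n_features=20, hidden=64):
        super().__init__()
        # A bidirectional LSTM captures interactions between sequence
        # positions that are distant in the chain.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # buried / exposed

    def forward(self, x):          # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)      # (batch, seq_len, 2*hidden)
        return self.head(out)      # per-residue logits

model = ResidueExposureNet()
profile = torch.randn(1, 120, 20)  # e.g. a 120-residue sequence profile
logits = model(profile)            # shape (1, 120, 2)
```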
Abstract:
This thesis is a collection of works on Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are discussed first. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to estimating magnitude in real time from the first few seconds of recorded signal. An evolutionary strategy for real-time magnitude estimation is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger set of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation are proposed to justify the observations. The third part of the thesis focuses on practical, real-time approaches for rapidly identifying the potentially damaged zone during a seismic event. Two approaches for rapid prediction of the damage area are proposed and tested: the first is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
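As an illustration of the evolutionary idea, the sketch below re-estimates magnitude over expanding P-wave time windows using a generic peak-displacement scaling law; the regression coefficients and the synthetic trace are placeholders, not the calibrated relation used in the thesis.

```python
# Schematic sketch of an evolutionary (expanding-window) real-time
# magnitude estimate from peak P-wave displacement Pd. Coefficients
# A, B, C are hypothetical placeholders, not calibrated values.
import numpy as np

A, B, C = -6.5, 0.7, -1.0  # hypothetical regression coefficients

def magnitude_from_pd(pd_m, dist_km):
    """Invert log10(Pd) = A + B*M + C*log10(R) for M."""
    return (np.log10(pd_m) - A - C * np.log10(dist_km)) / B

def evolutionary_estimate(displacement, dt, dist_km, windows=(3, 5, 10, 20)):
    """Re-estimate M as the P-wave time window expands (seconds)."""
    estimates = {}
    for w in windows:
        n = min(int(w / dt), len(displacement))
        pd = np.max(np.abs(displacement[:n]))  # peak displacement so far
        estimates[w] = magnitude_from_pd(pd, dist_km)
    return estimates

# e.g. a synthetic displacement trace sampled at 100 Hz, station at 80 km
trace = 1e-4 * np.abs(np.random.randn(2000)).cumsum() / 2000
print(evolutionary_estimate(trace, dt=0.01, dist_km=80.0))
```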
Abstract:
Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach for adapting a healthy brain atlas to MR images of tumor patients. To establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling is employed in combination with registration algorithms. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model that spans growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and the patient image is established using nonrigid registration. The method offers opportunities for atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
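For a flavor of the cellular-level component of such growth models, here is a minimal sketch of a Fisher-Kolmogorov reaction-diffusion step on a voxel grid, a common choice in image-based tumor modeling; it does not reproduce this paper's multi-physics coupling to tissue mechanics, and all parameters are illustrative.

```python
# Minimal sketch of a cellular-level growth component often used in
# image-based tumor models: Fisher-Kolmogorov reaction-diffusion on a
# voxel grid. Parameters are illustrative; the biomechanical coupling
# described in the abstract is not reproduced here.
import numpy as np

def grow_step(c, D=0.1, rho=0.05, dt=0.1):
    """One explicit Euler step of dc/dt = D*laplacian(c) + rho*c*(1-c)."""
    # 6-neighbor Laplacian via np.roll (periodic boundaries, fine for a sketch)
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) +
           np.roll(c, 1, 2) + np.roll(c, -1, 2) - 6 * c)
    return c + dt * (D * lap + rho * c * (1.0 - c))

c = np.zeros((32, 32, 32))
c[16, 16, 16] = 1.0        # seed the tumor at one voxel
for _ in range(100):       # normalized cell density spreads and saturates
    c = grow_step(c)
```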
Abstract:
Aircraft manufacturing industries are looking for solutions to increase their productivity. One such solution is to apply metrology systems during the production and assembly processes. The Metrology Process Model (MPM) (Maropoulos et al., 2007) has been introduced to connect metrology applications with assembly planning, manufacturing processes and product design. Measurability analysis is part of the MPM; its aim is to check the feasibility of measuring the designed large-scale components. Measurability analysis has been integrated in order to provide an efficient matching system. The metrology database is structured by developing the Metrology Classification Model, and a feature-based selection model is also explained. By combining the two classification models, a novel approach and selection process for an integrated measurability analysis system (MAS) are introduced; such an integrated MAS can provide far more meaningful matching results for operators. © Springer-Verlag Berlin Heidelberg 2010.
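A toy sketch of the matching idea behind such a measurability analysis is given below; the instrument database, its fields, and the 10:1 tolerance-to-uncertainty rule of thumb are invented for illustration and are not the MAS's actual schema.

```python
# Hypothetical sketch of feature-to-instrument matching for a
# measurability analysis: filter a metrology database by the tolerance
# and size each designed feature requires. Schema and numbers invented.
instruments = [
    {"name": "laser tracker",  "range_m": 40.0, "uncertainty_mm": 0.05},
    {"name": "photogrammetry", "range_m": 10.0, "uncertainty_mm": 0.10},
    {"name": "iGPS",           "range_m": 55.0, "uncertainty_mm": 0.25},
]

def measurable_by(feature_size_m, tolerance_mm, ratio=10.0):
    """Instruments whose uncertainty is <= tolerance/ratio and whose
    working range covers the feature (a common 10:1 rule of thumb)."""
    return [i["name"] for i in instruments
            if i["uncertainty_mm"] <= tolerance_mm / ratio
            and i["range_m"] >= feature_size_m]

print(measurable_by(feature_size_m=12.0, tolerance_mm=1.0))
```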
Abstract:
Artisanal mining is a global phenomenon that poses threats to environmental health and safety. Ambiguities in the way ore-processing facilities operate hinder the mining capacity of artisanal miners in Ghana. These problems are reviewed with respect to current socio-economic conditions, health and safety, environmental impacts, and the use of rudimentary technologies that limit fair-trade deals for miners. This research used an established data-driven, geographic information system (GIS)-based spatial analysis approach to locate a centralized processing facility within the Wassa Amenfi-Prestea Mining Area (WAPMA) in the Western region of Ghana. A spatial analysis technique utilizing ModelBuilder within the ArcGIS geoprocessing environment systematically and simultaneously analyzes a geographical dataset of selected criteria through suitability modeling. Spatial overlay analysis and multi-criteria decision analysis were selected to identify the most preferred locations for siting a processing facility. For optimal site selection, seven major criteria were considered: proximity to settlements, water resources, artisanal mining sites, roads, railways, tectonic zones, and slope. Site characterization and environmental considerations incorporated identified constraints, such as proximity to large-scale mines, forest reserves and state lands. The analysis was limited to the selected criteria relevant to the area under investigation. Saaty's analytical hierarchy process (AHP) was used to derive the relative importance weights of the criteria, and a weighted linear combination technique was then applied to combine the factors and determine the degree of potential site suitability, as sketched below. The final output map indicates the potential sites identified for establishing a facility centre. The results provide intuitive areas suitable for consideration.
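A minimal sketch of those two computational steps, assuming NumPy; the 3x3 pairwise comparison matrix and the placeholder rasters are illustrative, not the study's data:

```python
# Sketch of the two steps named in the abstract: deriving criterion
# weights with Saaty's AHP (principal eigenvector of a pairwise
# comparison matrix) and combining standardized criterion layers by
# weighted linear combination. The 3x3 matrix is illustrative only.
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale, e.g. roads vs water vs slope.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # AHP weights, sum to 1

# Consistency ratio (random index RI = 0.58 for n = 3).
CI = (vals.real[k] - len(P)) / (len(P) - 1)
CR = CI / 0.58                        # should be < 0.10 for consistency

# Weighted linear combination over standardized suitability rasters.
rasters = np.random.rand(3, 100, 100)       # placeholder criterion layers
suitability = np.tensordot(w, rasters, axes=1)
print(w.round(3), round(CR, 3), suitability.shape)
```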
Abstract:
The assessment of adolescent drinking behavior is a complex task, complicated by variability in drinking patterns, the transitory and developmental nature of the behavior, and the reliance (for large-scale studies) on self-report questionnaires. The Adolescent Alcohol Involvement Scale (Mayer & Filstead, 1979) is a 14-item screening tool designed to help identify alcohol misusers or more problematic drinkers. The present study utilized a large sample (n = 4066) of adolescents from Northern Ireland. Results of confirmatory factor analyses and reliability estimates revealed that the 14 items share sufficient common variance that scores can be considered reliable, and that the 14 items can be scored to provide a composite alcohol use score.
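One standard reliability estimate for such a composite score is Cronbach's alpha; a minimal sketch follows, with random placeholder responses standing in for the study's data (real item scores would be substituted).

```python
# Sketch of a common reliability estimate for a 14-item scale:
# Cronbach's alpha from an item-score matrix. The data below are
# random placeholders (alpha will be near zero for random data).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

scores = np.random.randint(0, 5, size=(4066, 14))  # placeholder responses
print(cronbach_alpha(scores))
```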
Abstract:
The focus of this research is to explore applications of the finite-difference formulation based on the latency insertion method (LIM) to the analysis of circuit interconnects. Special attention is devoted to the issues that arise in very large networks, such as on-chip signal and power distribution networks. We demonstrate that the LIM has the power and flexibility to handle the various types of analysis required at different stages of circuit design. The LIM is particularly suitable for simulations of very large-scale linear networks and can significantly outperform conventional circuit solvers (such as SPICE).
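A minimal sketch of the LIM idea on a simple LC ladder follows: node voltages and branch currents are advanced in alternating leapfrog half-steps, so no global matrix factorization is required. The topology and element values are illustrative only, not a network from this research.

```python
# Minimal sketch of LIM leapfrog updates on a lossy LC ladder line.
# Element values and topology are illustrative placeholders.
import numpy as np

n_nodes = 50
C = np.full(n_nodes, 1e-12)      # node capacitance to ground (F)
L = np.full(n_nodes - 1, 1e-9)   # series branch inductance (H)
R = np.full(n_nodes - 1, 0.1)    # series branch resistance (ohm)
dt = 0.5e-12                     # time step, well below sqrt(L*C) ~ 32 ps

V = np.zeros(n_nodes)
I = np.zeros(n_nodes - 1)        # current in branch k flows node k -> k+1

for step in range(2000):
    # Node update: dV/dt = (current in - current out) / C
    inflow = np.concatenate(([0.0], I)) - np.concatenate((I, [0.0]))
    V += dt * inflow / C
    V[0] = 1.0 if step * dt < 50e-12 else 0.0  # pulse source at node 0
    # Branch update, a half-step later: dI/dt = (V_k - V_{k+1} - R*I) / L
    I += dt * (V[:-1] - V[1:] - R * I) / L
```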
Abstract:
Aerosol samples were collected at a pasture site in the Amazon Basin as part of the project LBA-SMOCC-2002 (Large-Scale Biosphere-Atmosphere Experiment in Amazonia - Smoke Aerosols, Clouds, Rainfall and Climate: Aerosols from Biomass Burning Perturb Global and Regional Climate). Sampling was conducted during the late dry season, when the aerosol composition was dominated by biomass burning emissions, especially in the submicron fraction. A 13-stage Dekati low-pressure impactor (DLPI) was used to collect particles with nominal aerodynamic diameters (Dp) ranging from 0.03 to 10 µm. Gravimetric analyses of the DLPI substrates and filters were performed to obtain aerosol mass concentrations. The concentrations of total, apparent elemental, and organic carbon (TC, ECa, and OC) were determined using thermal and thermal-optical analysis (TOA) methods. A light transmission method (LTM) was used to determine the concentration of equivalent black carbon (BCe), or the absorbing fraction at 880 nm, for the size-resolved samples. During the dry period, owing to the pervasive presence of fires in the region upwind of the sampling site, concentrations of fine aerosols (Dp < 2.5 µm: average 59.8 µg m⁻³) were higher than those of coarse aerosols (Dp > 2.5 µm: 4.1 µg m⁻³). Carbonaceous matter, estimated as the sum of particulate organic matter (i.e., OC × 1.8) plus BCe, comprised more than 90% of the total aerosol mass. Concentrations of ECa (estimated by thermal analysis with a correction for charring) and BCe (estimated by the LTM) averaged 5.2 ± 1.3 and 3.1 ± 0.8 µg m⁻³, respectively. The determination of EC was improved by extracting water-soluble organic material from the samples, which reduced the average light-absorption Ångström exponent of particles in the size range 0.1 to 1.0 µm from >2.0 to approximately 1.2. The size-resolved BCe measured by the LTM showed a clear maximum between 0.4 and 0.6 µm in diameter. The concentrations of OC and BCe varied diurnally during the dry period, a variation related to diurnal changes in boundary-layer thickness and in fire frequency.
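The quoted Ångström exponents follow from absorption measured at two wavelengths via alpha = -ln(b1/b2)/ln(lambda1/lambda2); a quick sketch with hypothetical absorption coefficients (not the study's measurements):

```python
# The light-absorption Angstrom exponent follows from absorption
# coefficients b1, b2 at two wavelengths lam1, lam2.
# The values below are hypothetical, chosen to land near 1.2.
import numpy as np

def angstrom_exponent(b1, b2, lam1, lam2):
    return -np.log(b1 / b2) / np.log(lam1 / lam2)

# e.g. hypothetical absorption coefficients at 470 nm and 880 nm
print(angstrom_exponent(b1=25.0, b2=11.7, lam1=470.0, lam2=880.0))  # ~1.2
```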
Abstract:
The flowpaths by which water moves from watersheds to streams have important consequences for the runoff dynamics and biogeochemistry of surface waters in the Amazon Basin. The clearing of Amazon forest for cattle pasture has the potential to change runoff sources to streams by shifting runoff to more surficial flow pathways. We applied end-member mixing analysis (EMMA) to 10 small watersheds throughout the Amazon in which the solute composition of streamwater, groundwater, overland flow, soil solution, throughfall and rainwater was measured, largely as part of the Large-Scale Biosphere-Atmosphere Experiment in Amazonia. We found a range in the extent to which streamwater samples fell within the mixing space defined by the potential flowpath end-members, suggesting that some water sources to streams were not sampled. The contribution of overland flow to stream flow was greater in pasture watersheds than in forest watersheds of comparable size. Increases in the overland flow contribution to pasture streams ranged in some cases from 0% in forest to 27-28% in pasture, broadly consistent with results from hydrometric sampling of Amazon forest and pasture watersheds that indicate a 17- to 18-fold increase in the overland flow contribution to stream flow in pastures. In forest, overland flow was an important contribution to stream flow (45-57%) in ephemeral streams whose flows were dominated by stormflow. The overland flow contribution to stream flow decreased in importance with increasing watershed area, from 21-57% in forest and 60-89% in pasture watersheds of less than 10 ha, to 0% in forest and 27-28% in pasture watersheds greater than 100 ha. Soil solution contributions to stream flow were similar across watershed areas, and groundwater inputs generally increased in proportion to decreases in overland flow. Application of EMMA across multiple watersheds indicated patterns across gradients of stream size and land cover that were consistent with patterns determined by detailed hydrometric sampling.
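The mixing step at the core of EMMA can be sketched as a small least-squares problem: find end-member fractions that reproduce the stream's conservative-tracer concentrations and sum to one. The tracer values below are illustrative, not the study's measurements.

```python
# Minimal sketch of the mixing step in EMMA: solve for end-member
# fractions that reproduce streamwater tracer concentrations, with a
# mass-balance row forcing fractions to sum to one. Values illustrative.
import numpy as np

# Columns: end-members (overland flow, soil solution, groundwater);
# rows: two conservative tracers plus the sum-to-one constraint.
E = np.array([[12.0, 45.0, 80.0],    # tracer 1 concentration
              [30.0, 10.0,  5.0],    # tracer 2 concentration
              [ 1.0,  1.0,  1.0]])   # mass balance row
stream = np.array([40.0, 15.0, 1.0]) # observed stream concentrations

f, *_ = np.linalg.lstsq(E, stream, rcond=None)
print(dict(zip(["overland", "soil", "groundwater"], f.round(2))))
# -> roughly {'overland': 0.28, 'soil': 0.6, 'groundwater': 0.12}
```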
Abstract:
The performance optimisation of overhead conductors depends on systematic investigation of the fretting fatigue mechanisms in the conductor/clamping system. Accordingly, a fretting fatigue rig was designed and a limited range of fatigue tests was carried out in the medium-to-high cycle fatigue regime in order to obtain an exploratory S-N curve for a Grosbeak conductor mounted on a mono-articulated aluminium clamping system. Following these preliminary fatigue tests, the components of the conductor/clamping system (ACSR conductor, upper and lower clamps, bolt and nuts) were subjected to a failure analysis procedure to investigate the metallurgical free variables interfering with the fatigue test results, aiming at optimising testing reproducibility. The results indicated that the rupture along the planar fracture surfaces observed in the external Al strands of the conductor tested at the lower bending amplitude (0.9 mm) occurred by fatigue cracking (1 mm deep), followed by shear overload. The V-type fracture surfaces observed in some Al strands of the conductor tested at the higher bending amplitude (1.3 mm) were also produced by fatigue cracking (approximately 400 µm deep), followed by shear overload. Shear overload fracture (45° fracture surface) was also observed on the remaining Al wires of the conductor tested at the higher bending amplitude (1.3 mm). Additionally, the upper and lower Al-cast clamps presented microstructure-sensitive cracking, which was followed by particle detachment and the formation of abrasive debris at the clamp/conductor tribo-interface, further promoting the fretting mechanism. The detrimental formation of abrasive debris might be inhibited by selecting a more suitable class of as-cast Al alloy for the production of clamps. Finally, the bolt/nut system showed intense degradation of the carbon steel nut (fabricated in ferritic-pearlitic carbon steel, featuring machined threads of 190 HV), with intense plastic deformation and loss of material. Proper selection of both the bolt and nut materials and of the finishing process might prevent loss of clamping pressure during fretting testing. It is important to control the specification of these components (clamps, bolt and nuts) before starting large-scale fretting fatigue testing of overhead conductors in order to increase the reproducibility of this assessment. © 2008 Elsevier Ltd. All rights reserved.
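An exploratory S-N curve of the kind assembled here is often summarized with a Basquin power law fitted in log-log space; a minimal sketch follows, with invented placeholder stress/cycle pairs that are not the test results.

```python
# Sketch of summarizing an S-N curve with a Basquin power law,
# sigma_a = A * N**b, fitted by linear regression in log-log space.
# The data points below are invented placeholders, not test data.
import numpy as np

N = np.array([1e5, 3e5, 1e6, 5e6, 2e7])        # cycles to failure
sigma = np.array([120., 100., 85., 70., 60.])  # stress amplitude (MPa)

b, logA = np.polyfit(np.log10(N), np.log10(sigma), 1)
A = 10 ** logA
print(f"sigma_a = {A:.0f} * N^{b:.3f}")        # e.g. ~541 * N^-0.131
```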
Abstract:
A hydraulic jump is characterized by strong energy dissipation and mixing, large-scale turbulence, air entrainment, waves and spray. Despite recent pertinent studies, the interaction between air bubble diffusion and momentum transfer is not completely understood. The objective of this paper is to present experimental results from new measurements performed in a rectangular horizontal flume with partially-developed inflow conditions. The vertical distributions of void fraction and air bubble count rate were recorded for inflow Froude numbers Fr1 in the range 5.2 to 14.3. A rapid detrainment process was observed near the jump toe, whereas the structure of the air diffusion layer was clearly observed over longer distances. These new data were compared with previous data generally collected at lower Froude numbers. The comparison demonstrated that, at a fixed distance from the jump toe, the maximum void fraction Cmax increases with increasing Fr1. The vertical locations of the maximum void fraction and bubble count rate were consistent with previous studies. Finally, an empirical correlation between the upper boundary of the air diffusion layer and the distance from the impingement point was provided.
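For reference, the inflow Froude number that organizes these results is Fr1 = V1 / sqrt(g * d1), with V1 and d1 the inflow velocity and depth; a quick computation with illustrative inflow values:

```python
# Inflow Froude number of a hydraulic jump: Fr1 = V1 / sqrt(g * d1).
# The velocity and depth below are illustrative, not the flume's values.
import math

def froude(v1_ms, d1_m, g=9.81):
    return v1_ms / math.sqrt(g * d1_m)

print(froude(v1_ms=3.1, d1_m=0.012))  # ~9.0, within the 5.2-14.3 range
```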
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it had never been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm utilizes accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis tasks on 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
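The BR algorithm itself is not available in standard numerical libraries; as a point of reference, the sketch below shows the QR-based pipeline it is benchmarked against (Hessenberg reduction followed by eigenvalue iteration), with a random matrix standing in for a power-system state matrix.

```python
# Reference QR-based pipeline that the BR algorithm is compared with:
# reduce the state matrix to upper Hessenberg form, then compute all
# eigenvalues. The random matrix is a placeholder for a linearized
# power-system state matrix.
import numpy as np
from scipy.linalg import hessenberg, eig

A = np.random.randn(300, 300)   # placeholder small-signal state matrix
H = hessenberg(A)               # orthogonal reduction to Hessenberg form
eigvals = eig(H, right=False)   # QR-type iteration on H

# Small-signal stability requires every mode to have a negative real part.
print("stable:", bool(np.all(eigvals.real < 0)))
```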
Abstract:
Background: Meta-analysis is increasingly employed as a screening procedure in large-scale association studies to select promising variants for follow-up studies. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecification, causing power to be suboptimal, or require the evaluation of multiple genetic models, which augments the number of false-positive associations, ultimately leading to a waste of resources on fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification for optimizing screenings of genome-wide meta-analysis signals for further replication. Methods: Different methods, meta-analytical models and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait over a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios and between-study heterogeneity (τ²). Results: Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was found to be optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs, whether in the presence or absence of heterogeneity. Nonetheless, this strategy is sensitive to τ² whenever the susceptibility allele is common (MAF ≥ 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion: Invoking a simple Bonferroni adjustment and testing both multiplicative and recessive models is fast and an optimal strategy in large meta-analysis-based screenings. However, care must be taken when the examined variants are common, where specification of the multiplicative model alone may be preferable.
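A minimal sketch of the recommended strategy, assuming statsmodels: fit both a multiplicative (allele-dose) and a recessive logistic model per variant and Bonferroni-correct over the two tests. The simulated genotypes and trait below are placeholders, not the paper's simulation design.

```python
# Sketch of the Bonferroni-corrected two-model strategy: test each
# variant under multiplicative (allele-dose) and recessive codings,
# then correct for the two tests. Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=2000)            # genotype: 0/1/2 minor alleles
y = rng.binomial(1, 0.3 + 0.05 * (g == 2))   # binary trait, recessive effect

def pvalue(x, y):
    X = sm.add_constant(x.astype(float))
    fit = sm.Logit(y, X).fit(disp=0)
    return fit.pvalues[1]

p_mult = pvalue(g, y)                        # multiplicative coding
p_rec = pvalue((g == 2).astype(int), y)      # recessive coding
p = min(2 * min(p_mult, p_rec), 1.0)         # Bonferroni over the two models
print(p_mult, p_rec, p)
```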