952 results for Computational analysis


Relevance: 30.00%

Abstract:

The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, to make it applicable to a wide range of systems. This work handles that diversity by proposing a general framework to compute the maximum interference caused by the shared memory bus, and its impact on the execution time of the tasks running on the cores, under different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent stage takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent stage determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparison with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
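
As a minimal illustration of the arbiter-dependent stage for a TDM bus, the sketch below computes the worst-case number of cycles a single memory request may wait for one of its core's slots. The slot-table representation, function name and parameters are hypothetical and are not the framework's actual interface.

```python
# Hypothetical sketch: arbiter-dependent stage for a TDM bus arbiter.
# Given a slot table and a core id, compute the worst-case delay (in cycles)
# that a memory request issued at any point in the TDM frame may suffer
# before the start of one of the core's own slots.

def tdm_worst_case_delay(slot_table, core, slot_len):
    """slot_table: list of core ids, one per TDM slot (one frame).
    core: the core whose requests are analysed.
    slot_len: length of one slot in cycles."""
    n = len(slot_table)
    own_slots = [i for i, c in enumerate(slot_table) if c == core]
    if not own_slots:
        raise ValueError("core has no slot in the TDM frame")
    worst = 0
    for release_slot in range(n):
        # distance (in slots) to the next slot owned by `core`, wrapping around
        dist = min((s - release_slot) % n for s in own_slots)
        worst = max(worst, dist)
    return worst * slot_len

if __name__ == "__main__":
    # frame of 4 slots shared by 3 cores; core 0 owns slots 0 and 2
    print(tdm_worst_case_delay([0, 1, 0, 2], core=0, slot_len=10))  # -> 10
```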

Relevance: 30.00%

Abstract:

6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.

Relevance: 30.00%

Abstract:

Complex industrial plants exhibit multiple interactions among smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries, causing a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power law (PL) distributions. For early years, the data reveal double PL behavior, while, for more recent periods, a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, and multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, having the advantage of yielding a single parameter to express relationships between the data. The classical and the generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
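
The sketch below illustrates, on synthetic data, two of the ingredients mentioned above: a maximum-likelihood estimate of a power-law tail exponent and the discrete Kullback–Leibler divergence between two histograms. It is an assumed toy example, not the paper's code or data.

```python
# Illustrative sketch (synthetic data): power-law exponent estimation and
# Kullback-Leibler divergence between two empirical histograms.
import numpy as np

def power_law_alpha(samples, x_min):
    """MLE (Hill-type) estimator for the exponent of P(x) ~ x^(-alpha), x >= x_min."""
    tail = samples[samples >= x_min]
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) for normalized histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
fatalities = (1.0 - rng.random(5000)) ** (-1.0 / 1.5)   # synthetic Pareto-like data
print("estimated alpha:", power_law_alpha(fatalities, x_min=1.0))
hist_a, _ = np.histogram(fatalities, bins=50, range=(1, 20), density=True)
hist_b, _ = np.histogram(fatalities * 1.1, bins=50, range=(1, 20), density=True)
print("KL divergence:", kl_divergence(hist_a, hist_b))
```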

Relevance: 30.00%

Abstract:

This paper applies multidimensional scaling techniques and the Fourier transform to visualize possible time-varying correlations between 25 stock market values. The method is useful for observing clusters of stock markets with similar behavior.
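
A hedged sketch of the visualization idea follows: embed markets in two dimensions with MDS, using a correlation-based distance between return series. Synthetic returns stand in for the 25 market indices, and the distance mapping is one common choice, not necessarily the paper's.

```python
# Illustrative sketch: MDS embedding of markets from a correlation-based distance.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
returns = rng.normal(size=(250, 25))          # 250 daily returns x 25 markets (synthetic)
corr = np.corrcoef(returns, rowvar=False)     # 25 x 25 correlation matrix
dist = np.sqrt(2.0 * (1.0 - corr))            # common correlation-to-distance mapping
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.shape)  # (25, 2): one point per market; close points ~ similar behavior
```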

Relevance: 30.00%

Abstract:

High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from these assays allow precise quantitative measures, enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching a DNN's huge parameter space. Results show that, using this approach, we obtain a 30% speedup and a 2% accuracy improvement.
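
As an illustration of the deep-transfer-learning idea (not the authors' architecture), the sketch below reuses a network pre-trained on ImageNet as a frozen feature extractor and trains only a small head for MOA classes; the input size and class count are assumptions.

```python
# Hedged transfer-learning sketch: frozen pre-trained backbone + trainable MOA head.
import tensorflow as tf

NUM_MOA_CLASSES = 12  # assumption: number of mechanism-of-action classes

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # deep transfer learning: keep pre-trained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_MOA_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```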

Relevance: 30.00%

Abstract:

Dissertation submitted for the degree of Master in Electrical and Computer Engineering

Relevance: 30.00%

Abstract:

Continuous cell lines that proliferate in chemically defined and simple media have been highly regarded as suitable alternatives for vaccine production. One such cell line is the AG1.CR.pIX avian cell line developed by PROBIOGEN. This cell line can be cultivated in a fully scalable suspension culture and adapted to grow in a chemically defined, calf-serum-free medium [1]–[5]. The medium composition and cultivation strategy are important factors for reaching high virus titers. In this project, a series of computational methods was used to simulate the cell's response to different environments. The study is based on the metabolic model of the central metabolism proposed in [1]. In a first step, Metabolic Flux Analysis (MFA) was used along with measured uptake and secretion fluxes to estimate intracellular flux values. The network and data were found to be consistent. In a second step, Flux Balance Analysis (FBA) was performed to assess the cell's biological objective. The objective that resulted in the best fit between predicted results and experimental data was the minimization of oxidative phosphorylation. Employing this objective, Flux Variability Analysis (FVA) was then used to characterize the flux solution space. Furthermore, various scenarios simulating a reaction deletion (elimination of the corresponding compound from the medium) were analyzed, and the flux solution space for each scenario was calculated. Growth restrictions caused by essential and non-essential amino acids were accurately predicted. Fluxes related to essential amino acid uptake and catabolism, lipid synthesis, and ATP production via the TCA cycle were found to be essential for exponential growth. Finally, the data gathered during the previous steps were analyzed using principal component analysis (PCA) in order to assess potential changes in the physiological state of the cell. Three metabolic states were found, corresponding to zero, partial and maximum biomass growth rate. Elimination of non-essential amino acids or pyruvate from the medium showed no impact on the cell's assumed normal metabolic state.
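
The following sketch shows how the FBA/FVA steps described above could be expressed with the COBRApy toolbox; the model file name, the reaction identifiers and the oxidative-phosphorylation objective are placeholders, since the study uses the network from reference [1].

```python
# Hedged sketch of the FBA/FVA steps with COBRApy; all identifiers are placeholders.
import cobra
from cobra.flux_analysis import flux_variability_analysis

model = cobra.io.read_sbml_model("central_metabolism.xml")  # placeholder file name

# FBA with a minimization objective (hypothetical oxidative-phosphorylation reaction id)
model.objective = model.reactions.get_by_id("OXPHOS")
model.objective_direction = "min"
solution = model.optimize()
print("objective value:", solution.objective_value)

# FVA to characterize the flux solution space at the optimum
fva = flux_variability_analysis(model, fraction_of_optimum=1.0)
print(fva.head())

# Simulate removal of a medium compound by closing its exchange reaction
with model:
    model.reactions.get_by_id("EX_pyr_e").bounds = (0.0, 0.0)  # placeholder exchange id
    print("objective without pyruvate:", model.optimize().objective_value)
```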

Relevance: 30.00%

Abstract:

The structural analysis involves the definition of the model and the selection of the analysis type. The model should represent the stiffness, the mass and the loads of the structure. Structures can be represented using simplified models, such as lumped mass models, or advanced models resorting to the Finite Element Method (FEM) and the Discrete Element Method (DEM). Depending on the characteristics of the structure, different types of analysis can be used, such as limit analysis, linear and non-linear static analysis, and linear and non-linear dynamic analysis. Unreinforced masonry structures present low tensile strength, and linear analyses do not seem adequate for assessing their structural behaviour. On the other hand, static and dynamic non-linear analyses are complex, since they involve large computational time and advanced knowledge from the practitioner: non-linear analysis requires advanced knowledge of the material properties, the analysis tools and the interpretation of results. Limit analysis with macro-blocks can be assumed to be a more practical method for estimating the maximum load capacity of a structure. Furthermore, limit analysis requires a reduced number of parameters, which is an advantage for the assessment of ancient and historical masonry structures, given the difficulty in obtaining reliable data.
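
As a minimal, assumed example of limit analysis with rigid macro-blocks, the sketch below computes the collapse load multiplier for the overturning of a rectangular block about its toe under a horizontal load proportional to its weight.

```python
# Assumed example: kinematic limit analysis of a rigid rectangular macro-block
# overturning about its toe (no tensile strength, rigid body, hinge at the toe).

def overturning_multiplier(width, height):
    """Ratio of horizontal to vertical load at incipient overturning.
    Stabilizing moment W*(width/2) equals overturning moment lambda*W*(height/2),
    hence lambda = width / height."""
    return width / height

# Example: a masonry facade macro-block 0.6 m thick and 4.0 m tall
print(overturning_multiplier(0.6, 4.0))  # -> 0.15 (15% of gravity as equivalent static load)
```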

Relevance: 30.00%

Abstract:

Considering that vernacular architecture may bear important lessons on hazard mitigation, and that well-constructed examples showing traditional seismic-resistant features can present far less vulnerability than expected, this study aims at understanding the resisting mechanisms and seismic behavior of vernacular buildings through detailed finite element modeling and nonlinear static (pushover) analysis. This paper focuses specifically on a type of vernacular rammed earth construction found in the Portuguese region of Alentejo. Several rammed earth constructions found in the region were selected and studied in terms of dimensions, architectural layout, structural solutions, construction materials and detailing; as a result, a reference model was built, which is intended to be a simplified representative example of these constructions, gathering their most common characteristics. Different parameters that may affect the seismic response of this type of vernacular construction were identified, and a numerical parametric study was defined aiming at evaluating and quantifying their influence on the seismic behavior of these buildings. This paper is part of ongoing research that includes the development of a simplified methodology for assessing the seismic vulnerability of vernacular buildings, based on vulnerability index evaluation methods.

Relevance: 30.00%

Abstract:

Doctoral Thesis (Doctoral Program in Biomedical Engineering)

Relevance: 30.00%

Abstract:

The use of genome-scale metabolic models has been rapidly increasing in fields such as metabolic engineering. An important part of a metabolic model is the biomass equation, since this reaction will ultimately determine the predictive capacity of the model in terms of essentiality and flux distributions. Thus, in order to obtain a reliable metabolic model, the biomass precursors and their coefficients must be as precise as possible. Ideally, determination of the biomass composition would be performed experimentally, but when no experimental data are available it is established by approximation to closely related organisms. Computational methods, however, can extract some information from the genome, such as amino acid and nucleotide compositions. The main objectives of this study were to compare the biomass composition of several organisms and to evaluate how biomass precursor coefficients affect the predictability of several genome-scale metabolic models, by comparing predictions with experimental data from the literature. For that, the biomass macromolecular composition was experimentally determined, and the amino acid composition was both experimentally and computationally estimated, for several organisms. Sensitivity analysis studies were also performed with the Escherichia coli iAF1260 metabolic model concerning specific growth rates and flux distributions. The results obtained suggest that the macromolecular composition is conserved among related organisms. In contrast, experimental data for amino acid composition show no similarity among related organisms. It was also observed that the impact of macromolecular composition on specific growth rates and flux distributions is larger than the impact of amino acid composition, even when data from closely related organisms are used.
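
A hedged sketch of this kind of sensitivity analysis with COBRApy follows: one biomass-precursor coefficient of the iAF1260 model is perturbed and the predicted growth rate re-computed. The file name and the BiGG-style reaction and metabolite identifiers are assumptions to be checked against the model actually loaded.

```python
# Hedged sketch: perturb a biomass-precursor coefficient and observe the growth rate.
import cobra

model = cobra.io.read_sbml_model("iAF1260.xml")  # assumed local copy of the published model
biomass = model.reactions.get_by_id("BIOMASS_Ec_iAF1260_core_59p81M")  # id to be verified
base_growth = model.optimize().objective_value

# increase the demand for one amino acid (L-glutamate) in the biomass equation by 10%
glu = model.metabolites.get_by_id("glu__L_c")          # id to be verified
original_coeff = biomass.metabolites[glu]              # negative: consumed by biomass
biomass.add_metabolites({glu: 0.1 * original_coeff})   # combine=True adds to the coefficient
perturbed_growth = model.optimize().objective_value

print(f"growth rate: {base_growth:.4f} -> {perturbed_growth:.4f} 1/h")
```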

Relevance: 30.00%

Abstract:

Recently, there has been growing interest in the field of metabolomics, materialized by a remarkable growth in experimental techniques, available data and related biological applications. Indeed, techniques such as Nuclear Magnetic Resonance, Gas or Liquid Chromatography, Mass Spectrometry, Infrared and UV-visible spectroscopies have provided extensive datasets that can help in tasks such as biological and biomedical discovery, biotechnology and drug development. However, as with other omics data, the analysis of metabolomics datasets poses multiple challenges, both in terms of methodologies and in the development of appropriate computational tools. Indeed, none of the available software tools addresses the multiplicity of existing techniques and data analysis tasks. In this work, we make available a novel R package, named specmine, which provides a set of methods for metabolomics data analysis, including data loading in different formats, pre-processing, metabolite identification, univariate and multivariate data analysis, machine learning, and feature selection. Importantly, the implemented methods provide adequate support for the analysis of data from diverse experimental techniques, integrating a large set of functions from several R packages in a powerful, yet simple-to-use environment. The package, already available on CRAN, is accompanied by a web site where users can deposit datasets, scripts and analysis reports to be shared with the community, promoting the efficient sharing of metabolomics data analysis pipelines.

Relevance: 30.00%

Abstract:

Background: Several researchers seek methods for the selection of homogeneous groups of animals in experimental studies, a concern justified because homogeneity is an indispensable prerequisite for the randomization of treatments. The lack of robust methods that comply with statistical and biological principles is the reason why researchers use empirical or subjective methods, influencing their results. Objective: To develop a multivariate statistical model for the selection of a homogeneous group of animals for experimental research and to build a computational package to use it. Methods: The set of echocardiographic data of 115 male Wistar rats with supravalvular aortic stenosis (AoS) was used as an example for model development. Initially, the data were standardized and became dimensionless. Then, the variance matrix of the set was submitted to principal component analysis (PCA), aiming at reducing the parametric space and retaining the relevant variability. That technique established a new Cartesian system into which the animals were allocated, and finally the confidence region (ellipsoid) was built for the profile of the animals' homogeneous responses. The animals located inside the ellipsoid were considered as belonging to the homogeneous batch; those outside the ellipsoid were considered spurious. Results: The PCA established eight descriptive axes that accounted for 88.71% of the accumulated variance of the data set. The allocation of the animals in the new system and the construction of the confidence region revealed six spurious animals, compared to the homogeneous batch of 109 animals. Conclusion: The biometric criterion presented proved to be effective, because it considers the animal as a whole, analyzing all measured parameters jointly, in addition to having a small discard rate.
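
The sketch below reproduces the spirit of the selection criterion on synthetic data (not the echocardiographic set): standardize, project with PCA, and flag as spurious the animals outside a chi-square confidence ellipsoid in the retained principal-component space. The 95% confidence level and the variable count are assumptions.

```python
# Illustrative sketch: PCA + confidence ellipsoid for selecting a homogeneous batch.
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
data = rng.normal(size=(115, 12))            # 115 animals x 12 variables (synthetic)

z = StandardScaler().fit_transform(data)     # standardized, dimensionless variables
pca = PCA(n_components=0.85)                 # keep components explaining >= 85% variance
scores = pca.fit_transform(z)

# squared Mahalanobis distance in PC space; PCs are uncorrelated with known variances
d2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
cutoff = chi2.ppf(0.95, df=scores.shape[1])  # 95% confidence ellipsoid
spurious = np.where(d2 > cutoff)[0]
print("animals flagged as spurious:", spurious)
```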

Relevance: 30.00%

Abstract:

PEEC, computational electromagnetics