13 results for Computational analysis

at Instituto Politécnico do Porto, Portugal


Relevance: 100.00%

Abstract:

Quinoxaline and its derivatives are an important class of heterocyclic compounds, in which the elements N, S and O replace carbon atoms in the ring. The molecular formula of quinoxaline is C8H6N2, and its structure consists of two fused aromatic rings, benzene and pyrazine. It is rare in the natural state, but its synthesis is easy to carry out. Modifications to the quinoxaline structure provide a wide variety of compounds and activities, such as antimicrobial, antiparasitic, antidiabetic, antiproliferative, anti-inflammatory, anticancer, antiglaucoma and antidepressant activity, the last of these exhibiting AMPA-receptor antagonism. These compounds are also important in industry owing, for example, to their power to inhibit metal corrosion. Computational chemistry, a natural branch of theoretical chemistry, is a well-developed method used to represent molecular structures, simulating their behaviour with the equations of quantum and classical physics. A wide variety of computational chemistry software tools is available, allowing the calculation of energies, geometries, vibrational frequencies, transition states, reaction pathways, excited states and a variety of properties based on various uncorrelated and correlated wave functions. Their application to the study of quinoxalines is therefore important for determining the chemical characteristics of these compounds, allowing a more complete analysis in less time and at lower cost.
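
As an illustration of the kind of tool the abstract refers to, the sketch below builds quinoxaline from a SMILES string and relaxes its 3D geometry with the MMFF94 force field in RDKit. The library, the SMILES string and the force field are assumptions made for illustration; a quantum chemical study of the kind described would use electronic-structure methods instead of molecular mechanics.

```python
# Minimal sketch: generate and relax a 3D geometry of quinoxaline (C8H6N2)
# with RDKit's MMFF94 force field, then report the energy and coordinates.
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("c1ccc2nccnc2c1")   # quinoxaline: benzene + pyrazine
mol = Chem.AddHs(mol)                        # add explicit hydrogens
AllChem.EmbedMolecule(mol, randomSeed=42)    # initial 3D embedding
AllChem.MMFFOptimizeMolecule(mol)            # relax the geometry

props = AllChem.MMFFGetMoleculeProperties(mol)
ff = AllChem.MMFFGetMoleculeForceField(mol, props)
print("MMFF94 energy (kcal/mol):", round(ff.CalcEnergy(), 2))
conf = mol.GetConformer()
for atom in mol.GetAtoms():
    p = conf.GetAtomPosition(atom.GetIdx())
    print(atom.GetSymbol(), round(p.x, 3), round(p.y, 3), round(p.z, 3))
```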

Relevance: 60.00%

Abstract:

In recent years several countries have set up policies that allow exchange of kidneys between two or more incompatible patient–donor pairs. These policies lead to what is commonly known as kidney exchange programs. The underlying optimization problems can be formulated as integer programming models. Previously proposed models for kidney exchange programs have exponential numbers of constraints or variables, which makes them fairly difficult to solve when the problem size is large. In this work we propose two compact formulations for the problem, explain how these formulations can be adapted to address some problem variants, and provide results on the dominance of some models over others. Finally we present a systematic comparison between our models and two previously proposed ones via thorough computational analysis. Results show that compact formulations have advantages over non-compact ones when the problem size is large.
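
To make the contrast concrete, the sketch below implements the classical, non-compact cycle formulation on a toy instance: one binary variable per feasible exchange cycle of length at most K, each pair used at most once, and the number of transplants maximized. The toy compatibility digraph and the PuLP modeller are assumptions made for illustration, not the paper's models or data; note that the number of cycle variables grows exponentially with instance size, which is exactly what the paper's compact formulations avoid.

```python
# Toy sketch of the classical cycle formulation for kidney exchange.
from itertools import permutations
import pulp

# arcs (i, j): donor of pair i is compatible with patient of pair j (toy data)
arcs = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1)}
pairs, K = range(4), 3          # K = maximum allowed cycle length

def cycles(pairs, arcs, K):
    """Enumerate directed cycles of length 2..K (rotations deduplicated)."""
    found = set()
    for L in range(2, K + 1):
        for perm in permutations(pairs, L):
            if perm[0] == min(perm) and all(
                (perm[i], perm[(i + 1) % L]) in arcs for i in range(L)
            ):
                found.add(perm)
    return sorted(found)

cyc = cycles(pairs, arcs, K)
prob = pulp.LpProblem("kidney_exchange", pulp.LpMaximize)
x = {c: pulp.LpVariable(f"x{i}", cat="Binary") for i, c in enumerate(cyc)}
prob += pulp.lpSum(len(c) * x[c] for c in cyc)        # maximize transplants
for p in pairs:                                        # each pair in <= 1 cycle
    prob += pulp.lpSum(x[c] for c in cyc if p in c) <= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected cycles:", [c for c in cyc if x[c].value() == 1])
```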

Relevance: 60.00%

Abstract:

S100A6 is a small EF-hand calcium- and zinc-binding protein involved in the regulation of cell proliferation and cytoskeletal dynamics. It is overexpressed in neurodegenerative disorders and is a proposed marker for Amyotrophic Lateral Sclerosis (ALS). Following recent reports of amyloid formation by S100 proteins, we investigated the aggregation properties of S100A6. Computational analysis using the aggregation predictors Waltz and Zyggregator revealed increased propensity within S100A6 helices HI and HIV. Subsequent analysis of Thioflavin-T binding kinetics under acidic conditions elicited a very fast process with no lag phase and extensive formation of aggregates and stacked fibrils, as observed by electron microscopy. Ca2+ exerted an inhibitory effect on the aggregation kinetics, which could be reverted upon chelation. An FT-IR investigation of the early conformational changes occurring under these conditions showed that Ca2+ promotes anti-parallel β-sheet conformations that repress fibrillation. At pH 7, Ca2+ rendered the fibril formation kinetics slower: time-resolved imaging showed that fibril formation is highly suppressed, with aggregates forming instead. In the absence of metals an extensive network of fibrils is formed. S100A6 oligomers, but not fibrils, were found to be cytotoxic, decreasing cell viability by up to 40%. This effect was not observed when the aggregates were formed in the presence of Ca2+. Interestingly, native S100A6 seeds SOD1 aggregation, shortening its nucleation process. This suggests a cross-talk between these two proteins involved in ALS. Overall, these results put forward novel roles for S100 proteins, whose metal-modulated aggregation propensity may be a key aspect in their physiology and function.
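
The statement that aggregation proceeds "with no lag phase" is typically quantified by fitting the ThT fluorescence trace to a sigmoidal growth model and reading off the apparent lag time. The sketch below is a minimal version of that fit using SciPy on synthetic data; the model and the synthetic trace are assumed for illustration and are not the paper's analysis pipeline.

```python
# Fit a ThT trace to F(t) = F0 + A / (1 + exp(-k*(t - t50))) and report the
# apparent lag time, lag = t50 - 2/k. A lag near zero is consistent with the
# fast, nucleation-free aggregation described in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, F0, A, k, t50):
    return F0 + A / (1.0 + np.exp(-k * (t - t50)))

t = np.linspace(0, 10, 200)                              # time, hours
rng = np.random.default_rng(0)                           # synthetic "data"
trace = sigmoid(t, 1.0, 9.0, 2.5, 1.0) + rng.normal(0, 0.1, t.size)

popt, _ = curve_fit(sigmoid, t, trace, p0=[1, 8, 1, 2])
F0, A, k, t50 = popt
print(f"t50 = {t50:.2f} h, apparent lag time = {t50 - 2.0 / k:.2f} h")
```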

Relevance: 30.00%

Abstract:

The mechanisms of speech production are complex and have been attracting attention from researchers in both the medical and computer vision fields. Within the speech production mechanism, the study of the articulators is a complex issue, since they have a high degree of freedom during this process; this is particularly true of the tongue, which is difficult to control and observe. In this work, the shape of the tongue during the articulation of the oral vowels of European Portuguese is automatically characterized by applying statistical modeling to MR images. A point distribution model, which can extract the main characteristics of the motion of the tongue, is built from a set of images collected during artificially sustained articulations of European Portuguese sounds. The model built in this work allows a clearer understanding of the dynamic speech events involved in sustained articulations. The tongue shape model can also be useful for speech rehabilitation purposes, specifically to recognize compensatory movements of the articulators during speech production.
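
The core of a point distribution model can be stated compactly: stack the aligned landmark coordinates of each training shape into a vector, compute the mean shape, and apply PCA so that any tongue contour is approximated as x ≈ mean + P b. The sketch below is a minimal version of these steps, assuming NumPy and synthetic landmarks in place of contours annotated (and Procrustes-aligned) on the MR images.

```python
# Point distribution model core: mean shape + principal modes of variation.
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_points = 30, 20
base = np.column_stack([np.linspace(0, 1, n_points),
                        np.sin(np.linspace(0, np.pi, n_points))])
shapes = base[None] + 0.03 * rng.normal(size=(n_shapes, n_points, 2))
X = shapes.reshape(n_shapes, -1)            # each row: (x1, y1, ..., xn, yn)

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
var = S**2 / (n_shapes - 1)
n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1)
P = Vt[:n_modes].T                          # modes retaining 95% of variance

b = P.T @ (X[0] - mean)                     # shape parameters of one shape
reconstructed = (mean + P @ b).reshape(n_points, 2)
print(n_modes, "modes retain 95% of the shape variance")
```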

Relevance: 30.00%

Abstract:

This paper presents the pseudo phase plane (PPP) method for detecting the existence of a nanofilm on the nitroazobenzene-modified glassy carbon electrode (NAB-GC) system. The modified electrode system and the nitroazobenzene nanofilm were prepared by the electrochemical reduction of the diazonium salt of NAB at glassy carbon electrodes (GCE) in nonaqueous media. The IR spectra of the bare glassy carbon electrodes (GCE), the NAB-GC electrode system and the organic NAB film were recorded. The IR data of the bare GC, NAB-GC and NAB film were organized into four groups of series: FILM1, GC-NAB1, GC1; FILM2, GC-NAB2, GC2; FILM3, GC-NAB3, GC3; and FILM4, GC-NAB4, GC4. The PPP approach was applied to each group of data from the unmodified and modified electrode systems with the nanofilm. The results provided by the PPP method show the existence of the NAB film on the modified GC electrode.
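
The PPP method is, at its core, a time-delay embedding: a signal is plotted against a delayed copy of itself, x(t) versus x(t + τ). The sketch below illustrates the construction on a synthetic trace with NumPy and Matplotlib; in the paper the inputs are the IR spectra of the GC, NAB-GC and NAB-film samples, and the delay would be chosen from the data rather than fixed by hand.

```python
# Pseudo phase plane: plot a signal against a delayed copy of itself.
# Differences between the trajectories of bare- and modified-electrode
# spectra are what reveal the nanofilm; the trace here is synthetic.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 20, 2000)
x = np.sin(t) + 0.4 * np.sin(3.1 * t)       # stand-in for an IR trace
tau = 25                                     # delay, in samples

plt.plot(x[:-tau], x[tau:], lw=0.5)
plt.xlabel("x(t)")
plt.ylabel(f"x(t + {tau} samples)")
plt.title("Pseudo phase plane")
plt.show()
```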

Relevance: 30.00%

Abstract:

Drilling of composite plates is normally carried out with traditional techniques, but the risk of damage is high, which makes the use of non-destructive testing (NDT) important. Damage in a carbon/epoxy plate is evaluated by enhanced X-ray imaging. Four different drills are used. The images are analysed using computational vision techniques, and surface roughness is compared. The results suggest strategies for delamination reduction.

Relevance: 30.00%

Abstract:

Drilling of carbon fibre/epoxy laminates is usually carried out using standard drills. However, it is necessary to adapt the processes and/or tooling, as the risk of delamination or other damage is high. These problems can affect the mechanical properties of the produced parts and therefore lower their reliability. In this paper, four different drills, three commercial and a special step (prototype) drill, are compared in terms of thrust force during drilling and delamination. In order to evaluate damage, enhanced radiography is applied. The resulting images were then computationally processed using a previously developed image processing and analysis platform. Results show that the prototype drill gave encouraging results in terms of maximum thrust force and delamination reduction. Furthermore, it is possible to state that a correct choice of drill geometry, particularly the use of a pilot hole, a conservative cutting speed (53 m/min) and a low feed rate (0.025 mm/rev), can help to prevent delamination.
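
A conventional way to quantify the delamination seen in such radiographs is the delamination factor Fd = Dmax / D0, the ratio between the maximum diameter of the damaged zone and the nominal hole diameter. The abstract does not state which measure the authors' platform computes, so the sketch below is only an illustration with made-up diameters.

```python
# Delamination factor from measured diameters; Fd == 1.0 means no
# measurable delamination around the hole. Values below are illustrative.
def delamination_factor(d_max_mm: float, d_nominal_mm: float) -> float:
    """Fd = Dmax / D0, measured on the enhanced radiograph."""
    if d_nominal_mm <= 0:
        raise ValueError("nominal hole diameter must be positive")
    return d_max_mm / d_nominal_mm

for drill, d_max in [("twist drill", 7.4), ("step prototype", 6.3)]:
    print(drill, "Fd =", round(delamination_factor(d_max, 6.0), 2))
```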

Relevance: 30.00%

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, together with upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations of the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide some case studies to observe the impact of task parameters on the WCTT estimates.
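
The BP algorithm itself is not spelled out in the abstract, so the sketch below shows only the generic branch-and-prune idea it builds on: enumerate candidate sets of interfering packets along a route and prune any branch that the task-level characteristics rule out, keeping the largest feasible interference as the bound. All names, costs and the feasibility rule are illustrative assumptions, not the paper's algorithm.

```python
# Generic branch-and-prune skeleton for an interference upper bound.
from itertools import combinations

# (task_id, interference_cost_in_cycles) for packets that may share links
# with the packet under analysis; all values are made up for illustration
packets = [("T1", 40), ("T1", 40), ("T2", 25), ("T3", 60)]
base_latency = 100  # traversal time with no contention

def feasible(subset):
    # toy task-level constraint: at most one in-flight packet per task
    tasks = [t for t, _ in subset]
    return len(tasks) == len(set(tasks))

def wctt_upper_bound(packets, base):
    best = base
    for r in range(1, len(packets) + 1):
        for subset in combinations(packets, r):
            if not feasible(subset):
                continue  # prune: task parameters rule this branch out
            best = max(best, base + sum(cost for _, cost in subset))
    return best

print("WCTT bound:", wctt_upper_bound(packets, base_latency), "cycles")
```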

Relevance: 30.00%

Abstract:

The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, so as to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent stage takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent stage determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to the contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
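
As a concrete instance of the arbiter-dependent stage, consider a TDM bus: with n cores and a slot of S cycles, a request that just misses its own slot waits for the remaining cycles of that slot plus the slots of the other cores, giving the simple safe per-request bound sketched below. The formula and numbers are illustrative assumptions, not the paper's framework, which derives a bus-availability model per arbiter rather than a single closed formula.

```python
# Safe per-request interference bound under a TDM memory bus arbiter.
def tdm_worst_case_interference(n_cores: int, slot_cycles: int) -> int:
    """Upper bound on stall cycles for one request: it can miss its own
    slot by one cycle and then wait for every other core's slot."""
    return (n_cores - 1) * slot_cycles + (slot_cycles - 1)

def task_interference_bound(n_requests: int, n_cores: int, slot: int) -> int:
    """Trivial arbiter-independent stage: assume every request is worst-case."""
    return n_requests * tdm_worst_case_interference(n_cores, slot)

print(task_interference_bound(n_requests=1000, n_cores=4, slot=10), "cycles")
```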

Relevance: 30.00%

Abstract:

6th Real-Time Scheduling Open Problems Seminar (RTSOPS 2015), Lund, Sweden.

Relevance: 30.00%

Abstract:

Complex industrial plants exhibit multiple interactions among smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries, causing a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power law (PL) distributions. For early years the data reveal double PL behavior, while for more recent time periods a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, together with multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, having the advantage of yielding a single parameter to express relationships between the data. Both the classical and the generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
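
A minimal version of the power-law check is to plot the empirical complementary cumulative distribution of fatalities per accident on log-log axes and fit a straight line to its tail, so that a good linear fit indicates P(X ≥ x) ~ x^(-a). The sketch below does this with NumPy on synthetic Pareto data standing in for the real accident series; the tail threshold and the exponent are illustrative assumptions.

```python
# Power-law tail fit on the empirical CCDF of a heavy-tailed sample.
import numpy as np

rng = np.random.default_rng(7)
fatalities = rng.pareto(a=1.5, size=5000) + 1.0      # synthetic stand-in

x = np.sort(fatalities)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size       # P(X >= x)
tail = (x >= 10) & (ccdf > 0)                        # fit the tail only
slope, intercept = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)
print(f"estimated tail exponent: {-slope:.2f}")      # ~1.5 for this sample
```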

Relevance: 30.00%

Abstract:

This paper applies multidimensional scaling techniques and the Fourier transform to visualize possible time-varying correlations among 25 stock market values. The method is useful for observing clusters of stock markets with similar behavior.
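
The sketch below shows the core computation under the usual convention of mapping correlations to distances via d_ij = sqrt(2(1 - rho_ij)) and embedding them in the plane with metric MDS, so that similarly behaving markets land close together. The returns are synthetic and scikit-learn is an assumed tool; the paper does not name its software, and it additionally applies a Fourier analysis of how the correlations vary over time.

```python
# Correlation -> distance -> 2D MDS map of market similarity.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
common = rng.normal(size=(500, 3))                   # 3 hidden common factors
loads = rng.normal(size=(3, 25))
returns = common @ loads + 0.5 * rng.normal(size=(500, 25))

rho = np.corrcoef(returns, rowvar=False)             # 25 x 25 correlations
dist = np.sqrt(2.0 * np.clip(1.0 - rho, 0.0, None))  # correlation distance
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.shape)                                  # (25, 2) map positions
```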

Relevance: 30.00%

Abstract:

High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images produced by these assays allow precise quantitative measures, enabling the effects of small molecules on a host cell to be distinguished from those on a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching the huge parameter space of a DNN. Results show that, using this approach, we obtain a 30% speedup and a 2% improvement in accuracy.
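
The transfer-learning recipe described above can be sketched as follows, assuming a Keras/TensorFlow stack and an ImageNet-pretrained backbone (the article does not commit to a specific framework or network): freeze the pretrained convolutional base and train only a small classification head that maps each cell image to one of the MOA classes.

```python
# Deep transfer learning sketch: frozen pretrained base + trainable head.
import tensorflow as tf

n_moa = 12                                           # illustrative class count
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                               # freeze pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_moa, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(cell_images, moa_labels, epochs=10)  # hypothetical single-cell
# crops (resized to 224x224x3) and integer MOA labels would be supplied here
```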