959 results for Computational algorithm
Abstract:
A computational algorithm (based on Smullyan's analytic tableau method) that verifies whether a given well-formed formula in propositional calculus is a tautology or not has been implemented on a DEC system 10. The stepwise refinement approach of program development used for this implementation forms the subject matter of this paper. The top-down design has resulted in a modular and reliable program package. This computational algorithm compares favourably with the algorithm based on the well-known resolution principle used in theorem provers.
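For readers unfamiliar with the tableau method, the following is a minimal, self-contained sketch of the idea in Python, not the paper's DEC-10 implementation: to show a formula is a tautology, refute its negation, closing every branch on a contradictory pair of literals. The tuple encoding and rule set are illustrative assumptions.

    # Formulas as nested tuples: atoms are strings, e.g. ('imp', 'p', 'p').
    def is_literal(f):
        return isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str))

    def expand(f):
        """Return ('alpha', parts) or ('beta', parts) per Smullyan's rules."""
        op = f[0]
        if op == 'and':
            return 'alpha', [f[1], f[2]]
        if op == 'or':
            return 'beta', [f[1], f[2]]
        if op == 'imp':
            return 'beta', [('not', f[1]), f[2]]
        g = f[1]                      # f = ('not', g) with g compound
        if g[0] == 'not':
            return 'alpha', [g[1]]
        if g[0] == 'and':
            return 'beta', [('not', g[1]), ('not', g[2])]
        if g[0] == 'or':
            return 'alpha', [('not', g[1]), ('not', g[2])]
        return 'alpha', [g[1], ('not', g[2])]   # negated implication

    def closed(branch):
        lits = {f for f in branch if is_literal(f)}
        return any(('not', a) in lits for a in lits if isinstance(a, str))

    def refutable(branch):
        """True iff every branch of the tableau rooted at `branch` closes."""
        if closed(branch):
            return True
        compound = [f for f in branch if not is_literal(f)]
        if not compound:
            return False              # open saturated branch: countermodel
        f = compound[0]
        rest = [g for g in branch if g is not f]
        kind, parts = expand(f)
        if kind == 'alpha':           # conjunctive rule: extend the branch
            return refutable(rest + parts)
        return all(refutable(rest + [p]) for p in parts)  # disjunctive: split

    def is_tautology(f):
        return refutable([('not', f)])   # phi is valid iff not-phi refutes

    assert is_tautology(('imp', 'p', 'p'))
    assert not is_tautology(('or', 'p', 'q'))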
Abstract:
In this paper, we present an algorithm for full-wave electromagnetic analysis of nanoplasmonic structures. We use the three-dimensional Method of Moments to solve the electric field integral equation. The computational algorithm is implemented in the C language. As examples of application of the code, the problems of scattering from a nanosphere and from a rectangular nanorod are analyzed. The calculated characteristics are the near-field distribution and the spectral response of these nanoparticles. The convergence of the method for different discretization sizes is also discussed.
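As context for the method named here (standard MoM notation, assumed rather than taken from the paper), the unknown current is expanded in basis functions and the integral equation is tested to yield a dense linear system:

    \mathbf{J}(\mathbf{r}) \approx \sum_{n=1}^{N} a_n \,\mathbf{f}_n(\mathbf{r}), \qquad
    Z_{mn} = \langle \mathbf{f}_m, \mathcal{L}\mathbf{f}_n \rangle, \quad
    v_m = \langle \mathbf{f}_m, \mathbf{E}^{\mathrm{inc}} \rangle, \qquad
    \mathbf{Z}\mathbf{a} = \mathbf{v},

where \mathcal{L} is the integro-differential operator mapping the current to the scattered field. Refining the discretization enlarges N, which is why the convergence study over discretization sizes matters.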
Abstract:
Modal analysis is widely used in the classic theory of power-system modelling. This technique is also applied to model multiconductor transmission lines and their self and mutual electrical parameters. However, this methodology has some particularities and inaccuracies for specific applications, which are not clearly described in the technical literature. This study provides a brief review of modal decoupling applied in transmission-line digital models, and thereafter a novel and simplified computational routine is proposed to overcome the possible errors introduced by modal decoupling in the simulation/modelling computational algorithm. © The Institution of Engineering and Technology 2013.
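In the standard theory this abstract reviews (notation assumed here, not taken from the paper), the phase-domain line equations

    \frac{d^2 \mathbf{V}}{dx^2} = \mathbf{Z}\mathbf{Y}\,\mathbf{V}, \qquad
    \frac{d^2 \mathbf{I}}{dx^2} = \mathbf{Y}\mathbf{Z}\,\mathbf{I},

are decoupled by similarity transformations \mathbf{V} = \mathbf{T}_V \mathbf{V}_m and \mathbf{I} = \mathbf{T}_I \mathbf{I}_m with \mathbf{T}_V^{-1}\mathbf{Z}\mathbf{Y}\,\mathbf{T}_V = \boldsymbol{\Lambda} diagonal, so each mode k propagates independently with \gamma_k = \sqrt{\lambda_k}. The inaccuracies alluded to typically arise when a single real, frequency-independent transformation matrix is applied to frequency-dependent, untransposed lines.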
Abstract:
Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve the measurement of the maximum diameter of the viable lesion. This paper describes a computational methodology to measure, through the contrast-enhanced area of the lesions, the maximum diameter of the tumor by a computational algorithm. 63 computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Optimization of the algorithm's detection and quantification was performed using the virtual phantom. After that, we compared the algorithm's findings for the maximum diameter of the target lesions against radiologist measures. Computed results for the maximum diameter are in good agreement with the results obtained by radiologist evaluation, indicating that the algorithm was able to properly detect the tumor limits. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas agreement to within 1.0 cm was found for small-sized tumors. Differences between algorithm and radiologist measures were small for small-sized tumors, with a trend toward a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
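As a hedged illustration of the measurement step only (the paper's detection and optimization pipeline is not reproduced here), the maximum in-plane diameter of an already-segmented lesion mask is the largest pairwise distance between its pixels; the mask and pixel spacing below are assumed inputs.

    import numpy as np

    def max_diameter_cm(mask, pixel_spacing_cm):
        """Largest pairwise (Feret) diameter of a binary lesion mask, in cm.

        Brute force over all foreground pixels: O(n^2), acceptable for the
        small masks of single lesions; a real pipeline would take the
        convex hull first.
        """
        ys, xs = np.nonzero(mask)
        pts = np.column_stack((ys, xs)).astype(float)
        diffs = pts[:, None, :] - pts[None, :, :]
        return np.sqrt((diffs ** 2).sum(-1)).max() * pixel_spacing_cm

    # Toy example: a 5-pixel-long lesion with 0.1 cm pixels -> 0.4 cm.
    mask = np.zeros((8, 8), dtype=bool)
    mask[3, 2:7] = True
    print(max_diameter_cm(mask, 0.1))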
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, switching chemical-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores, which can open and close in response to different stimuli (gating), and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, like nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally solved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction, and on the numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels, because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on the electrodiffusion theory, is computationally inexpensive, and was used for an extensive analysis of the molecular determinants of the channel conductance. The first record of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single-channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels, and the experimental techniques used to measure channel currents. The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning, and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapter 3 and Chapter 4 the ion conduction through potassium channels is analyzed, by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation between the channel conductance and the potassium concentration in the intracellular vestibule emerged.
The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single-channel conductance. The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinity of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs was compared. Both experimental measurements and molecular modeling were used in order to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be useful to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All the available crystallographic structures of KcsA refer to a channel occupied by potassium ions. To conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown, and have been investigated by numerical simulations. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted KcsA channel, and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely in physiological conditions. However, the simulation of such an extreme condition could help to identify the structural conformations, and so the functional states, accessible to potassium ion channels. The last chapter of the thesis deals with the atomic structure of the α-Hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible usage in technological applications. The atomic structure of α-Hemolysin was revealed by X-ray crystallography, but several pieces of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single-channel currents and numerical simulations. This thesis is organized in two parts: the first part provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second part describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
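The Poisson-Nernst-Planck framework referred to in Chapters 3-4 takes, in its standard steady-state form (standard electrodiffusion theory, not a transcription of the thesis's solver),

    \nabla \cdot \left( \epsilon \nabla \phi \right) = -e \sum_i z_i c_i - \rho_{\mathrm{fixed}},

    \mathbf{J}_i = -D_i \left( \nabla c_i + \frac{z_i e}{k_B T}\, c_i \nabla \phi \right), \qquad \nabla \cdot \mathbf{J}_i = 0,

where c_i, z_i, and D_i are the concentration, valence, and diffusivity of species i; the single-channel current follows by integrating the ionic fluxes over a channel cross-section.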
Abstract:
Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements could make the difference between making a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Due to their importance in search processes in large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. This proposal is based on an extension of the mathematical framework upon which the basic theory of detection of outliers, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed; a detection method is proposed, along with a computational algorithm that allows the performance of outlier detection tasks with an almost-linear complexity. To illustrate its viability, the results of the application of the outlier-detection algorithm to the concrete example of a large database are presented.
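Not the article's algorithm, but a minimal sketch of why rough-set machinery admits near-linear detection: grouping records into indiscernibility (equivalence) classes with a hash table costs O(n), and records falling in rare classes are natural outlier candidates. All names below are illustrative.

    from collections import Counter

    def outlier_scores(records):
        """Score each record by the rarity of its indiscernibility class.

        Records whose attribute tuple is shared by few other records
        (small equivalence classes) get scores near 1.
        """
        classes = Counter(records)        # one pass: build equivalence classes
        n = len(records)
        return [1.0 - classes[r] / n for r in records]

    data = [('a', 1), ('a', 1), ('a', 1), ('b', 9)]
    print(outlier_scores(data))           # the ('b', 9) record scores highest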
Abstract:
The healing process for bone fractures is sensitive to mechanical stability and blood supply at the fracture site. Most currently available mechanobiological algorithms of bone healing are based solely on mechanical stimuli, while the explicit analysis of revascularization and its influence on the healing process has not been thoroughly investigated in the literature. In this paper, revascularization was described by two separate processes: angiogenesis and nutrition supply. Mathematical models for angiogenesis and nutrition supply have been proposed and integrated into an existing fuzzy algorithm of fracture healing. The computational algorithm of fracture healing, consisting of stress analysis, analyses of angiogenesis and nutrient supply, and tissue differentiation, has been tested against previously published animal experimental results. The simulation results showed that, for small and medium-sized fracture gaps, the nutrient supply is sufficient for bone healing, whereas for a large fracture gap, non-union may be induced either by deficient nutrient supply or by inadequate mechanical conditions. The comparisons with experimental results demonstrated that the improved computational algorithm is able to simulate a broad spectrum of fracture-healing cases and to predict and explain delayed unions and non-unions induced by large gap sizes and different mechanical conditions. The new algorithm will allow the simulation of more realistic clinical fracture-healing cases with various fracture gaps and geometries and may be helpful in optimising implants and methods for fracture fixation.
Abstract:
Microbes in natural and artificial environments, as well as in the human body, are a key part of the functional properties of these complex systems. The presence or absence of certain microbial taxa is a correlate of functional status, like risk of disease or the course of metabolic processes of a microbial community. As microbes are highly diverse and mostly not cultivable, molecular markers like gene sequences are a potential basis for detection and identification of key types. The goal of this thesis was to study molecular methods for identification of microbial DNA in order to develop a tool for analysis of environmental and clinical DNA samples. Particular emphasis was placed on specificity of detection, which is a major challenge when analyzing complex microbial communities. The approach taken in this study was the application and optimization of enzymatic ligation of DNA probes coupled with microarray read-out for high-throughput microbial profiling. The results show that fungal phylotypes and human papillomavirus genotypes could be accurately identified from pools of PCR amplicons generated from purified sample DNA. Approximately 1 ng/μl of sample DNA was needed for representative PCR amplification, as measured by comparisons between clone sequencing and microarray. A minimum of 0.25 amol/μl of PCR amplicons was detectable amongst 5 ng/μl of background DNA, suggesting that the detection limit of the test comprising a ligation reaction followed by microarray read-out was approximately 0.04%. Detection from sample DNA directly was shown to be feasible with probes forming a circular molecule upon ligation, followed by PCR amplification of the probe. In this approach, the minimum detectable relative amount of target genome was found to be 1% of all genomes in the sample, as estimated from 454 deep sequencing results. The signal-to-noise ratio of contact-printed microarrays could be improved by using an internal microarray hybridization control oligonucleotide probe together with a computational algorithm. The algorithm was based on identification of a bias in the microarray data and correction of that bias, as shown by simulated and real data. The results further suggest semiquantitative detection to be possible by ligation detection, allowing estimation of target abundance in a sample. However, in practice, comprehensive sequence information on full-length rRNA genes is needed to support probe design with complex samples. This study shows that the DNA microarray has the potential to serve as an accurate microbial diagnostic platform that takes advantage of increasing sequence data and replaces traditional, less efficient methods that still dominate routine testing in laboratories. The data suggest that a ligation-reaction-based microarray assay can be optimized to a degree that allows good signal-to-noise and semiquantitative detection.
Abstract:
In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup times are proportional to the length of the jobs already scheduled, i.e. p-s-d setup times. The learning effect reduces the actual processing time of a job because the workers are involved in doing the same job or activity repeatedly. Hence, the processing time of a job depends on its position in the sequence. We consider the total absolute difference in completion times (TADC) as the objective function. This problem is denoted as 1/LE, (Spsd)/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters, a and b, denote the constant learning index and the normalising index, respectively. A parametric analysis of b on the 1/LE, (Spsd)/TADC problem for a given value of a is carried out in this study. In addition, a computational algorithm is developed to obtain the number of optimal sequences and the range of b in which each of the sequences is optimal, for a given value of a. We derive two bounds, b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by arranging the longest job in the first position and the rest of the jobs in shortest processing time order.
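In the notation standard for this problem family (our rendering, stated as an assumption rather than quoted from the article), a job placed in position r has actual processing time and p-s-d setup time

    p^{A}_{[r]} = p_{[r]} \, r^{a} \ (a \le 0), \qquad
    s_{[1]} = 0, \quad s_{[r]} = b \sum_{i=1}^{r-1} p^{A}_{[i]} \ (b \ge 0),

and the objective is

    \mathrm{TADC} = \sum_{i=1}^{n} \sum_{j=i+1}^{n} \left| C_i - C_j \right|,

so the bounds a* and b* mark where the optimal ordering switches to the longest-job-first-then-SPT sequence described above.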
Abstract:
Constitutive modeling in granular materials has historically been based on macroscopic experimental observations that, while usually effective at predicting the bulk behavior of these types of materials, suffer important limitations when it comes to understanding the physics behind the grain-to-grain interactions that induce the material to macroscopically behave in a given way when subjected to certain boundary conditions.
The advent of the discrete element method (DEM) in the late 1970s helped scientists and engineers gain a deeper insight into some of the most fundamental mechanisms operating at the grain scale. However, one of the most critical limitations of classical DEM schemes has been their inability to account for complex grain morphologies. Instead, simplified geometries such as discs, spheres, and polyhedra have typically been used. Fortunately, in the last fifteen years, there has been increasing development of new computational as well as experimental techniques, such as non-uniform rational basis splines (NURBS) and 3D X-ray computed tomography (3DXRCT), which are helping to create new tools that enable the inclusion of complex grain morphologies into DEM schemes.
Yet, as the scientific community is still developing these new tools, there remains a gap in thoroughly understanding the physical relations connecting the grain and continuum scales, as well as in the development of discrete techniques that can predict the emergent behavior of granular materials without resorting to phenomenology, but rather directly unravel the micro-mechanical origin of macroscopic behavior.
In order to contribute towards closing the aforementioned gap, we have developed a micro-mechanical analysis of macroscopic peak strength, critical state, and residual strength in two-dimensional non-cohesive granular media, where typical continuum constitutive quantities such as frictional strength and dilation angle are explicitly related to their corresponding grain-scale counterparts (e.g., inter-particle contact forces, fabric, particle displacements, and velocities), providing an across-the-scale basis for better understanding and modeling granular media.
In the same way, we utilize a new DEM scheme (LS-DEM) that takes advantage of a mathematical technique called level set (LS) to enable the inclusion of real grain shapes into a classical discrete element method. After calibrating LS-DEM with respect to real experimental results, we exploit part of its potential to study the dependency of critical state (CS) parameters such as the critical state line (CSL) slope, CSL intercept, and CS friction angle on the grain's morphology, i.e., sphericity, roundness, and regularity.
Finally, we introduce a first computational algorithm to "clone" the grain morphologies of a sample of real digital grains. This cloning algorithm allows us to generate an arbitrary number of cloned grains that display the same morphological features (e.g., roundness and aspect ratio) as their real parents and can be included in a DEM simulation of a given mechanical phenomenon. In turn, this will help with the development of discrete techniques that can directly predict the engineering-scale behavior of granular media without resorting to phenomenology.
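As a hedged illustration of shape generation only (the thesis's cloning algorithm matches statistics measured from real grains; here the descriptor spectrum is simply prescribed), a 2-D grain outline with controllable elongation and roughness can be synthesized from low-order Fourier descriptors:

    import numpy as np

    def clone_outline(n_pts=256, aspect=1.3, roughness=0.05,
                      n_harmonics=12, seed=None):
        """Closed 2-D grain outline from a prescribed descriptor spectrum.

        The 2nd harmonic sets elongation; higher harmonics with decaying
        amplitude set surface roughness. Illustrative only.
        """
        rng = np.random.default_rng(seed)
        theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
        r = np.ones(n_pts)
        r += (aspect - 1.0) / (aspect + 1.0) * np.cos(2.0 * theta)
        for k in range(3, n_harmonics + 1):
            amp = roughness / k            # decaying spectrum: rounder grains
            phase = rng.uniform(0.0, 2.0 * np.pi)
            r += amp * np.cos(k * theta + phase)
        return r * np.cos(theta), r * np.sin(theta)

    x, y = clone_outline(seed=0)   # one synthetic grain, ready for meshing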
Abstract:
Humans have exceptional abilities to learn new skills, manipulate tools and objects, and interact with the environment. In order to be successful at these tasks, the brain has developed learning mechanisms to deal with and compensate for the constantly changing dynamics of the world. If this mechanism or these mechanisms can be understood from a computational point of view, then they can also be used to drive the adaptability and learning of robots. In this paper, we present a new technique for examining changes in the feedforward motor command due to adaptation. This technique can then be utilized for examining motor adaptation in humans and determining a computational algorithm that explains motor learning. © 2007.
Abstract:
A newly introduced inverse class-E power amplifier (PA) was designed, simulated, fabricated, and characterized. The PA operated at 2.26 GHz and delivered 20.4-dBm output power with a peak drain efficiency (DE) of 65% and a power gain of 12 dB. Broadband performance was achieved across a 300-MHz bandwidth with a DE of better than 50% and 1-dB output-power flatness. The concept of enhanced injection predistortion, with a capability to selectively suppress unwanted sub-frequency components and hence suitable for minimizing memory effects, is described, coupled with a new technique that facilitates an accurate measurement of the phase of the third-order intermodulation (IM3) products. A robust iterative computational algorithm proposed in this paper dispenses with the need for manual tuning of the amplitude and phase of the injected IM3 signals, as commonly employed in previous publications. The constructed inverse class-E PA was subjected to a non-constant-envelope 16 quadrature amplitude modulation signal and was linearized using a combined lookup table (LUT) and enhanced injection technique, from which the superior properties of each technique can be simultaneously adopted. The proposed method resulted in 0.7% measured error vector magnitude (in rms) and a 34-dB improvement in adjacent channel leakage power ratio, which was 10 dB better than that achieved using LUT predistortion alone.
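To illustrate the iterative-injection idea only (a toy memoryless baseband model with made-up numbers, not the paper's measurement-driven algorithm), the loop below measures the lower IM3 phasor at the output of a cubic nonlinearity and updates an injected tone to cancel it, with no manual amplitude/phase tuning:

    import numpy as np

    fs, n = 1.024e6, 4096                  # tones fall on exact FFT bins
    t = np.arange(n) / fs
    f1, f2 = 100e3, 110e3                  # two-tone test
    f_im3 = 2 * f1 - f2                    # lower IM3 product (90 kHz)

    def pa(x):
        """Toy memoryless PA: linear gain plus a weak third-order term."""
        return x + 0.05 * x ** 3

    def tone_phasor(y, f):
        """Complex amplitude of the component of y at bin frequency f."""
        return 2.0 * np.mean(y * np.exp(-2j * np.pi * f * t))

    x0 = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    inj = 0.0 + 0.0j                       # amplitude & phase of injected tone
    for it in range(6):                    # fixed-point tuning loop
        x = x0 + np.real(inj * np.exp(2j * np.pi * f_im3 * t))
        im3 = tone_phasor(pa(x), f_im3)    # "measured" output IM3 phasor
        print(f"iteration {it}: |IM3| = {abs(im3):.3e}")
        inj -= im3                         # injected tone passes with gain ~1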
Abstract:
The problem of robust pole assignment by feedback in a linear, multivariable, time-invariant system which is subject to structured perturbations is investigated. A measure of robustness, or sensitivity, of the poles to a given class of perturbations is derived, and a reliable and efficient computational algorithm is presented for constructing a feedback which assigns the prescribed poles and optimizes the robustness measure.
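This is the class of robust pole-assignment problem that, for example, SciPy's place_poles function implements (the 'KNV0' method belongs to this family; the matrices below are arbitrary illustrations):

    import numpy as np
    from scipy.signal import place_poles

    # Choose F so that A - B F has the prescribed poles while keeping the
    # closed-loop eigenvectors well conditioned, i.e. the poles
    # insensitive to perturbations.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    res = place_poles(A, B, [-4.0, -5.0], method='KNV0')
    F = res.gain_matrix
    print("assigned poles:", np.linalg.eigvals(A - B @ F))
    print("eigenvector conditioning:", np.linalg.cond(res.X))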
Abstract:
For linear multivariable time-invariant continuous or discrete-time singular systems it is customary to use a proportional feedback control in order to achieve a desired closed loop behaviour. Derivative feedback is rarely considered. This paper examines how derivative feedback in descriptor systems can be used to alter the structure of the system pencil under various controllability conditions. It is shown that derivative and proportional feedback controls can be constructed such that the closed loop system has a given form and is also regular and has index at most 1. This property ensures the solvability of the resulting system of dynamic-algebraic equations. The construction procedures used to establish the theory are based only on orthogonal matrix decompositions and can therefore be implemented in a numerically stable way. The problem of pole placement with derivative feedback alone and in combination with proportional state feedback is also investigated. A computational algorithm for improving the “conditioning” of the regularized closed loop system is derived.
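In standard descriptor-system notation (a summary of the setting, with symbols assumed rather than quoted), proportional-plus-derivative feedback acts as

    E\dot{x} = Ax + Bu, \qquad u = Fx - G\dot{x} + v,

giving the closed-loop system

    (E + BG)\,\dot{x} = (A + BF)\,x + Bv,

so the derivative gain G reshapes the leading matrix of the pencil: choosing G so that E + BG has sufficiently high rank is what permits a closed-loop pencil \lambda (E + BG) - (A + BF) that is regular and of index at most 1, ensuring solvability of the resulting dynamic-algebraic equations.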