927 results for implementation method
Abstract:
A chaotic encryption algorithm is proposed based on "Life-like" cellular automata (CA), which act as a pseudo-random number generator (PRNG). The paper's main focus is the application of chaos theory to cryptography; CA were therefore explored in search of this "chaos" property. Accordingly, the manuscript concentrates on tests such as the Lyapunov exponent, entropy, and Hamming distance to measure chaos in CA, as well as statistical analyses such as the DIEHARD and ENT suites. Our results achieved higher randomness quality than other ciphers in the literature. These results reinforce the supposition of a strong relationship between chaos and randomness quality. The "chaos" property of CA is thus a good reason to employ them in cryptography, as are their simplicity, low implementation cost, and respectable encryption power.
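A minimal sketch of the general idea follows; it is not the paper's exact scheme. It evolves a Life-like CA (here with the standard B3/S23 rule) from a key-derived initial configuration and harvests a keystream from the grid, which is then XORed with the plaintext. The rule choice, cell-sampling strategy, and key schedule are illustrative assumptions.

```python
# Sketch only: a Life-like CA used as a keystream generator for a stream
# cipher; rule, sampling, and seeding are assumptions, not the paper's scheme.
import numpy as np

def ca_step(grid, born={3}, survive={2, 3}):
    """One synchronous update of a Life-like CA on a toroidal grid."""
    # Count the 8 neighbours with periodic boundary conditions.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return np.where(grid == 1,
                    np.isin(n, list(survive)),
                    np.isin(n, list(born))).astype(np.uint8)

def ca_keystream(key, nbytes, size=64, warmup=100):
    """Derive a pseudo-random byte stream from the CA evolution."""
    rng = np.random.default_rng(key)             # key -> initial configuration
    grid = rng.integers(0, 2, (size, size), dtype=np.uint8)
    for _ in range(warmup):                      # discard the transient
        grid = ca_step(grid)
    out = bytearray()
    while len(out) < nbytes:
        grid = ca_step(grid)
        bits = grid.flatten()[:8]                # sample 8 cells per step
        out.append(int("".join(map(str, bits)), 2))
    return bytes(out)

def xor_cipher(data, key):
    ks = ca_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
enc = xor_cipher(msg, key=1234)
assert xor_cipher(enc, key=1234) == msg          # symmetric stream cipher
```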
Abstract:
Many engineering sectors are challenged by multi-objective optimization problems. Even if the idea behind these problems is simple and well established, implementing any procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread. Usually they supply a discrete picture of the non-dominated solutions, a Pareto set. Although it is very interesting to know the non-dominated solutions, an additional criterion is needed to select one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over a Pareto set that never needs to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions in a single scale, and is based on a single run of non-linear single-objective optimizers.
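A minimal sketch of this style of steering, under assumptions: the paper's control function is not specified here, so a weighted Chebyshev scalarization stands in for it. Each preference value drives one single-objective run across the trade-off without ever enumerating the Pareto set.

```python
# Sketch (not the paper's exact formulation): a scalarizing control
# function steers a single-objective optimizer over the implicit Pareto set.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # first objective, e.g. cost
    return (x[0] - 1.0)**2 + x[1]**2

def f2(x):  # second objective, e.g. deviation from another target
    return (x[0] + 1.0)**2 + x[1]**2

def control(t, x):
    # Weighted Chebyshev scalarization: unlike a plain weighted sum it can
    # also reach non-convex parts of the Pareto front.
    return max(t * f1(x), (1.0 - t) * f2(x))

# One single-objective run per preference value t; the Pareto set itself
# is never constructed explicitly.
for t in (0.2, 0.5, 0.8):
    res = minimize(lambda x: control(t, x), x0=np.zeros(2), method="Nelder-Mead")
    print(t, res.x, f1(res.x), f2(res.x))
```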
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is carried out at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods are needed to accomplish the task. The aim of this work was the implementation of predictive computational methods allowing fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for predicting the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from biases present in the training dataset, which cause over-prediction of the most represented examples in all other predictors developed so far. This important result was achieved by a modification I made to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo predicts the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from a genome, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model (HMM)). The method, called GPIPE, greatly improved prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE predicted up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo, http://gpcr.biocomp.unibo.it/bacello; eSLDB, http://gpcr.biocomp.unibo.it/esldb; GPIPE, http://gpcr.biocomp.unibo.it/gpipe.
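The thesis modifies the SVM algorithm itself; the sketch below conveys only the idea behind a "Balanced SVM", using scikit-learn's per-class penalty weighting as a stand-in for the custom modification: errors on the under-represented class are penalized more heavily, so the majority class is not over-predicted. The toy data are illustrative.

```python
# Sketch of the Balanced-SVM idea via class weighting (a stand-in, not the
# thesis's exact algorithm): reweight the penalty so rare classes are not
# swamped by the most represented examples.
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Imbalanced toy data standing in for subcellular-localization features.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

plain = SVC(kernel="rbf").fit(Xtr, ytr)
balanced = SVC(kernel="rbf", class_weight="balanced").fit(Xtr, ytr)

print(classification_report(yte, plain.predict(Xte)))
print(classification_report(yte, balanced.predict(Xte)))
```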
Abstract:
In this report it was designed an innovative satellite-based monitoring approach applied on the Iraqi Marshlands to survey the extent and distribution of marshland re-flooding and assess the development of wetland vegetation cover. The study, conducted in collaboration with MEEO Srl , makes use of images collected from the sensor (A)ATSR onboard ESA ENVISAT Satellite to collect data at multi-temporal scales and an analysis was adopted to observe the evolution of marshland re-flooding. The methodology uses a multi-temporal pixel-based approach based on classification maps produced by the classification tool SOIL MAPPER ®. The catalogue of the classification maps is available as web service through the Service Support Environment Portal (SSE, supported by ESA). The inundation of the Iraqi marshlands, which has been continuous since April 2003, is characterized by a high degree of variability, ad-hoc interventions and uncertainty. Given the security constraints and vastness of the Iraqi marshlands, as well as cost-effectiveness considerations, satellite remote sensing was the only viable tool to observe the changes taking place on a continuous basis. The proposed system (ALCS – AATSR LAND CLASSIFICATION SYSTEM) avoids the direct use of the (A)ATSR images and foresees the application of LULCC evolution models directly to „stock‟ of classified maps. This approach is made possible by the availability of a 13 year classified image database, conceived and implemented in the CARD project (http://earth.esa.int/rtd/Projects/#CARD).The approach here presented evolves toward an innovative, efficient and fast method to exploit the potentiality of multi-temporal LULCC analysis of (A)ATSR images. The two main objectives of this work are both linked to a sort of assessment: the first is to assessing the ability of modeling with the web-application ALCS using image-based AATSR classified with SOIL MAPPER ® and the second is to evaluate the magnitude, the character and the extension of wetland rehabilitation.
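A minimal sketch of the multi-temporal pixel-based idea, under assumptions: two classification maps from different dates are compared pixel by pixel, flagging transitions of interest (here, a hypothetical "dry" class becoming "water" indicates re-flooding). The class codes and random maps are illustrative, not SOIL MAPPER® output.

```python
# Sketch only: pixel-based change detection between two classified maps;
# class codes DRY/WATER/VEGETATION are hypothetical placeholders.
import numpy as np

DRY, WATER, VEGETATION = 1, 2, 3

def reflooded_mask(map_t0, map_t1):
    """Pixels classified dry at t0 and water at t1."""
    return (map_t0 == DRY) & (map_t1 == WATER)

t0 = np.random.default_rng(0).integers(1, 4, (100, 100))
t1 = np.random.default_rng(1).integers(1, 4, (100, 100))
mask = reflooded_mask(t0, t1)
print("re-flooded fraction:", mask.mean())
```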
Abstract:
This Ph.D. thesis presents a general, robust methodology that can cover any type of 2D acoustic optimization problem. A procedure coupling Boundary Elements (BE) and Evolutionary Algorithms is proposed for systematic geometric modification of road barriers, leading to designs with ever-increasing screening performance. Numerical simulations involving single- and multi-objective optimizations of noise barriers of varied nature are included in this document. The results disclosed justify the implementation of this methodology by leading to optimal solutions of previously defined topologies that, in general, greatly outperform the acoustic efficiency of classical, widely used barrier designs normally erected near roads.
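A minimal sketch of the BE/EA coupling, under stated assumptions: in a real run each candidate barrier geometry would be scored by a boundary-element simulation; here `insertion_loss` is a hypothetical placeholder fitness, and the evolutionary loop is a generic genetic algorithm, not the thesis's particular operator set.

```python
# Sketch of an evolutionary loop driving a (placeholder) BE fitness.
import random

def insertion_loss(genes):
    # Placeholder for a boundary-element simulation of the barrier profile
    # encoded by `genes`; higher is better (more acoustic screening).
    return -sum((g - 0.5)**2 for g in genes)

def evolve(pop_size=30, n_genes=8, generations=50, mut=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=insertion_loss, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=insertion_loss)

best = evolve()
print(best, insertion_loss(best))
```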
Abstract:
Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation, and general applicability to a variety of problems. Interest in the topic has grown among researchers, and it has seen significant development in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivative of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate; as a result it is referred to as the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities as well as problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method can overcome these issues because it subdivides the computational domain into a number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; here it will be referred to as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
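A small sketch of the core DQ idea: the derivative at a node is a weighted sum of function values at all nodes. The weights below follow Shu's explicit GDQ formulas for polynomial-based quadrature; the Chebyshev-Gauss-Lobatto grid and the test function are illustrative choices.

```python
# Sketch: first-derivative DQ weighting coefficients (Shu's GDQ formulas).
import numpy as np

def dq_weights(x):
    """First-derivative weighting matrix a[i, j] for nodes x."""
    n = len(x)
    # M(x_i) = prod_{k != i} (x_i - x_k)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i, i] = -a[i].sum()      # rows sum to zero: d/dx of a constant is 0
    return a

# Chebyshev-Gauss-Lobatto nodes avoid the Runge instability of uniform grids.
n = 15
x = np.cos(np.pi * np.arange(n) / (n - 1))
D = dq_weights(x)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print("max error of d/dx sin(x):", err)   # spectrally small
```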
Abstract:
The conventional way of calculating hard scattering processes in perturbation theory using Feynman diagrams is not efficient enough to calculate all necessary processes, for example for the Large Hadron Collider, to sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis.

In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations, and recursive calculations with scalar diagrams, with maximal-helicity-violating vertices, and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or larger; for fewer than eight external partons, the recursion relation with shifted momenta offers the best performance. When investigating numerical stability and accuracy, we found that all methods give satisfactory results.

In the second part of this thesis we present an implementation of a parton shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders, and massive partons in the final state are included. Finally, we studied numerical results for an electron-positron collider, the Tevatron, and the Large Hadron Collider.
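A toy sketch of the recursive idea, heavily simplified: in a scalar φ³ theory an off-shell current for a range of external legs is built once from all ways of splitting the range at the cubic vertex, then cached. This captures why recursion beats the factorial growth of diagram-by-diagram evaluation; real gluon currents additionally carry color and Lorentz structure, which are omitted here, and the momenta are hypothetical.

```python
# Toy Berends-Giele-style recursion for scalar phi^3 currents (sketch only).
from functools import lru_cache

# Hypothetical external momenta as 4-vectors (E, px, py, pz).
momenta = [(2.0, 0.0, 0.0, 2.0), (2.0, 0.0, 0.0, -2.0),
           (1.5, 1.0, 0.5, 0.0), (1.5, -1.0, -0.5, 0.0)]

def minkowski_sq(p):
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def total_momentum(i, j):
    return tuple(sum(p[mu] for p in momenta[i:j + 1]) for mu in range(4))

@lru_cache(maxsize=None)
def current(i, j):
    """Off-shell current for external legs i..j, computed once and cached."""
    if i == j:
        return 1.0                                   # scalar external leg
    prop = 1.0 / minkowski_sq(total_momentum(i, j))  # off-shell propagator
    # Sum over all ways to split the leg range at the cubic vertex.
    return prop * sum(current(i, k) * current(k + 1, j)
                      for k in range(i, j))

print(current(0, len(momenta) - 1))
```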
Abstract:
In the last decade the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load-carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. The technique consists of inserting carbon fibre reinforced polymer laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically epoxy adhesive or cement grout. Up to now, research efforts have mainly focused on several structural aspects, such as bond behaviour, flexural and/or shear strengthening effectiveness, and the energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the most widespread adhesives used to bond reinforcements to concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resins, for which proper curing conditions must be assured. Therefore, non-destructive methods for monitoring the curing process of epoxy resins in NSM CFRP systems are desirable, in view of obtaining continuous information on the effectiveness of curing and the expected bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems. This objective is pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method), which was developed for monitoring the early stiffness evolution of cement-based materials. The experimental program was composed of two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips, conducted to assess the evolution of bond behaviour between CFRP and concrete from early ages; and (ii) EMM-ARM tests for monitoring the progressive stiffness development of the structural adhesive used in CFRP applications. To verify the capability of the proposed method for evaluating the elastic modulus of the epoxy, the static E-modulus was determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the feasibility of a new method for the continuous monitoring and quality control of NSM CFRP applications.
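A minimal sketch of the EMM-ARM principle, under stated assumptions: the resonant frequency of a beam containing the curing adhesive is tracked from an ambient-vibration record, and the evolving modulus is back-calculated from Euler-Bernoulli beam theory for the first cantilever mode. The geometry, mass, inertia, and single-mode model below are illustrative, not the LEST test setup.

```python
# Sketch: resonant frequency from ambient vibration -> E-modulus estimate.
import numpy as np

def dominant_frequency(signal, fs):
    """Peak of the amplitude spectrum of an ambient-vibration record."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[spectrum.argmax()]

def modulus_from_frequency(f1, rho_A, L, I):
    """Invert f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho_A * L^4))."""
    lam1_sq = 3.516                    # first cantilever mode constant
    return (2.0 * np.pi * f1 / lam1_sq)**2 * rho_A * L**4 / I

# Synthetic record: a 40 Hz resonance buried in noise.
fs, t = 1000.0, np.arange(0, 10, 1e-3)
acc = (np.sin(2 * np.pi * 40.0 * t)
       + 0.5 * np.random.default_rng(0).normal(size=t.size))
f1 = dominant_frequency(acc, fs)
E = modulus_from_frequency(f1, rho_A=0.05, L=0.25, I=2.0e-10)  # SI units
print(f"f1 = {f1:.1f} Hz, E = {E / 1e9:.2f} GPa")
```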
Abstract:
Background Increasing attention is being paid to improvement in undergraduate science, technology, engineering, and mathematics (STEM) education through increased adoption of research-based instructional strategies (RBIS), but high-quality measures of faculty instructional practice do not exist to monitor progress. Purpose/Hypothesis The measure of how well an implemented intervention follows the original is called fidelity of implementation. This framework was used to address the research questions: What is the fidelity of implementation of selected RBIS in engineering science courses? That is, how closely does engineering science classroom practice reflect the intentions of the original developers? Do the critical components that characterize an RBIS discriminate between engineering science faculty members who claimed use of the RBIS and those who did not? Design/Method A survey of 387 U.S. faculty teaching engineering science courses (e.g., statics, circuits, thermodynamics) included questions about class time spent on 16 critical components and use of 11 corresponding RBIS. Fidelity was quantified as the percentage of RBIS users who also spent time on corresponding critical components. Discrimination between users and nonusers was tested using chi-square tests. Results Overall fidelity of the 11 RBIS ranged from 11% to 80% of users spending time on all required components. Fidelity was highest for RBIS with one required component: case-based teaching, just-in-time teaching, and inquiry learning. Thirteen of 16 critical components discriminated between users and nonusers for all RBIS to which they were mapped. Conclusions Results were consistent with initial mapping of critical components to RBIS. Fidelity of implementation is a potentially useful framework for future work in STEM undergraduate education.
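A small sketch of the discrimination test: does reported class time on a critical component differ between self-identified users and nonusers of an RBIS? The contingency counts below are illustrative, not the survey data.

```python
# Sketch: chi-square test of independence on an illustrative 2x2 table.
from scipy.stats import chi2_contingency

#                 spent time   no time
table = [[90, 30],   # claimed use of the RBIS
         [40, 80]]   # did not claim use
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # small p => component discriminates
```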
Abstract:
The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This limits its accuracy if eMC is applied to non-Varian machines. In this work eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, changes made in a previous study to extend eMC to the low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively; an edge source of electrons; a transmission source of photons; and a line source of electrons and photons representing the particles from the scrapers or inserts and head-scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code for the secondary particles is improved. The macro MC dose calculations are validated against corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The generalized eMC is validated by comparing calculated and measured dose distributions in water for Varian, Elekta, and Siemens machines over a variety of beam energies, applicator sizes, and SSDs. The comparisons are performed in units of cGy per MU. Overall, calculated and measured dose distributions agree within 2% or 2 mm for all machine types and all combinations of parameters investigated. The results of the dose comparisons suggest that the generalized eMC is now suitable to calculate dose distributions for Varian, Elekta, and Siemens linear accelerators with sufficient accuracy in the range of the investigated combinations of beam energies, applicator sizes, and SSDs.
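The abstract quotes a "2% or 2 mm" agreement level; one standard way to formalize such a dose-difference/distance-to-agreement criterion is the gamma index (Low et al.). The sketch below is a simple 1D gamma computation on toy depth-dose curves and is not the evaluation pipeline used in the paper.

```python
# Sketch: 1D gamma index with a 2% (global) / 2 mm criterion.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
    """gamma <= 1 where dose_eval agrees with dose_ref within dd/dta."""
    norm = dose_ref.max()                      # global dose normalization
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dose_term = ((dose_eval - di) / (dd * norm))**2
        dist_term = ((x - xi) / dta)**2
        gam[i] = np.sqrt((dose_term + dist_term).min())
    return gam

x = np.linspace(0, 50, 201)                    # depth in mm
ref = np.exp(-((x - 20) / 12)**2)              # toy depth-dose curves
ev = np.exp(-((x - 20.5) / 12)**2) * 1.01
g = gamma_1d(x, ref, ev)
print("pass rate:", (g <= 1).mean())
```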
Abstract:
PURPOSE: To study the behavior and influence of a multileaf collimator (MLC) on dose calculation, verification, and portal energy spectra in the case of intensity-modulated fields obtained with a step-and-shoot or a dynamic technique. METHODS: The 80-leaf MLC for the Varian Clinac 2300 C/D was implemented in a previously developed Monte Carlo (MC) based multiple source model (MSM) for a 6 MV photon beam. Using this model and the MC program GEANT, dose distributions, energy fluence maps, and energy spectra at different portal planes were calculated for three different MLC applications. RESULTS: The comparison of MC-calculated dose distributions in the phantom and portal plane with those measured with films showed agreement within 3% and 1.5 mm for all cases studied. The deviations occur mainly at the extremes of the intensity modulation. The MC method makes it possible to investigate, among other aspects, dose components, energy fluence maps, tongue-and-groove effects, and energy spectra at portal planes. CONCLUSION: The MSM, together with the implementation of the MLC, is appropriate for a number of investigations in intensity-modulated radiation therapy (IMRT).
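A generic sketch of sampling from a multiple source model: a sub-source is chosen according to its relative weight, then an energy is drawn from that source's spectrum. The sub-sources, weights, and gamma-distributed spectra below are illustrative assumptions, not the fitted 6 MV model of the paper.

```python
# Sketch: sampling particle energies from a weighted multiple source model.
import numpy as np

rng = np.random.default_rng(0)
sources = {
    "target":     {"weight": 0.90, "spectrum": lambda n: rng.gamma(2.0, 0.8, n)},
    "flattening": {"weight": 0.07, "spectrum": lambda n: rng.gamma(1.5, 0.6, n)},
    "collimator": {"weight": 0.03, "spectrum": lambda n: rng.gamma(1.2, 0.5, n)},
}

def sample_energies(n):
    names = list(sources)
    w = np.array([sources[s]["weight"] for s in names])
    counts = rng.multinomial(n, w / w.sum())    # pick sub-source per particle
    return np.concatenate([sources[s]["spectrum"](c)
                           for s, c in zip(names, counts)])

E = sample_energies(100_000)
print("mean energy (MeV):", E.mean())
```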
Abstract:
This work presents an innovative integration of sensing and nano-scaled fluidic actuation, combining pH-sensitive optical dye immobilization with the electro-osmotic phenomenon in polar solvents such as water for flow-through pH measurements. These measurements are performed in a flow-through sensing device (FTSD) configuration designed and fabricated at MTU. A relatively novel and interesting material, through-wafer mesoporous silica substrates with pore diameters of 20-200 nm and pore depths of 500 µm, is fabricated and implemented for electro-osmotic pumping and flow-through fluorescence sensing for the first time. Performance characteristics of macroporous silicon (> 500 µm) implemented for electro-osmotic pumping include a very large flow efficiency of 19.8 µL min⁻¹ V⁻¹ cm⁻² and a maximum pressure efficiency of 86.6 Pa/V, compared with mesoporous silica membranes at 2.8 µL min⁻¹ V⁻¹ cm⁻² flow efficiency and 92 Pa/V pressure efficiency. The electrical current of the EOP system at 60 V applied voltage with macroporous silicon membranes is 1.02 × 10⁻⁶ A, with a power consumption of 61.74 × 10⁻⁶ W. Optical measurements on mesoporous silica are performed spectroscopically from 300 nm to 1000 nm using ellipsometry, including angularly resolved transmission and reflection measurements that extend into the infrared regime. Refractive index (n) values for oxidized and un-oxidized mesoporous silicon samples at 1000 nm are found to be 1.36 and 1.66, respectively. Fluorescence results and characterization confirm successful pH measurement using ratiometric techniques. The sensitivity measured for fluorescein in buffer solution is 0.51 a.u./pH, compared to a sensitivity of ~0.2 a.u./pH for fluorescein in the porous silica template. Porous silica membranes are efficient templates for the immobilization of optical dyes and represent a promising route to increased sensitivity for small variations in chemical properties. The FTSD represents a device topology suitable for long-term monitoring of lakes and reservoirs. Unique and important contributions of this work include the fabrication of a through-wafer mesoporous silica membrane that has been thoroughly characterized optically using ellipsometry. Mesoporous silica membranes are tested as a porous medium in an electro-osmotic pump for generating high pressure capacities owing to the nanometer pore sizes of the porous medium. Further, dye-immobilized mesoporous silica membranes, along with macroporous silicon substrates, are implemented for continuous pH measurements using fluorescence changes in a flow-through sensing device configuration. This novel integration and demonstration is entirely silicon-based, is implemented for the first time, and can lead to miniaturized flow-through sensing systems based on MEMS technologies.
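A small sketch of the ratiometric read-out idea: the ratio of dye emission at two wavelengths is mapped to pH through a calibration curve, which cancels drifts in dye amount and excitation power. The calibration points below are hypothetical, not measured values from this work.

```python
# Sketch: ratiometric pH read-out via an illustrative calibration curve.
import numpy as np

# Hypothetical calibration: intensity ratio I(520 nm)/I(450 nm) vs. pH.
cal_ratio = np.array([0.8, 1.1, 1.6, 2.3, 3.1])
cal_pH    = np.array([5.0, 5.5, 6.0, 6.5, 7.0])

def ph_from_ratio(r):
    # The calibration is monotone, so simple interpolation suffices here.
    return np.interp(r, cal_ratio, cal_pH)

print(ph_from_ratio(1.9))   # a measured ratio between calibration points
```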
Abstract:
BACKGROUND: Neurally adjusted ventilatory assist (NAVA) delivers assist in proportion to the patient's respiratory drive as reflected by the diaphragm electrical activity (EAdi). We examined to what extent NAVA can unload inspiratory muscles, and whether unloading is sustainable when implementing a NAVA level identified as adequate (NAVAal) during a titration procedure. METHODS: Fifteen adult, critically ill patients with a PaO2/FiO2 ratio < 300 mm Hg were studied. NAVAal was identified based on the change from a steep increase to a less steep increase in airway pressure (Paw) and tidal volume (Vt) in response to systematically increasing the NAVA level from low (NAVAlow) to high (NAVAhigh). NAVAal was implemented for 3 h. RESULTS: At NAVAal, the median esophageal pressure-time product (PTPes) and EAdi values were reduced by 47% of NAVAlow (quartiles, 16 to 69% of NAVAlow) and 18% of NAVAlow (quartiles, 15 to 26% of NAVAlow), respectively. At NAVAhigh, PTPes and EAdi values were reduced by 74% of NAVAlow (quartiles, 56 to 86% of NAVAlow) and 36% of NAVAlow (quartiles, 21 to 51% of NAVAlow; p ≤ 0.005 for all). Parameters during 3 h on NAVAal were not different from parameters during titration at NAVAal, and were as follows: Vt, 5.9 mL/kg predicted body weight (PBW) (quartiles, 5.4 to 7.2 mL/kg PBW); respiratory rate (RR), 29 breaths/min (quartiles, 22 to 33 breaths/min); mean inspiratory Paw, 16 cm H2O (quartiles, 13 to 20 cm H2O); PTPes, 45% of NAVAlow (quartiles, 28 to 57% of NAVAlow); and EAdi, 76% of NAVAlow (quartiles, 63 to 89% of NAVAlow). The PaO2/FiO2 ratio, PaCO2, and cardiac performance during NAVAal were unchanged, while Paw and Vt were lower and RR was higher when compared to conventional ventilation before implementing NAVAal. CONCLUSIONS: Systematically increasing the NAVA level reduces respiratory drive, unloads respiratory muscles, and offers a method to determine an assist level that results in sustained unloading, low Vt, and stable cardiopulmonary function when implemented for 3 h.
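A sketch of the titration read-out, under assumptions: NAVAal is taken at the break point where the response (Vt or Paw) changes from a steep to a less steep rise. A two-segment least-squares fit over candidate break points is one way to recover that level; the study's exact identification procedure may differ, and the data below are synthetic.

```python
# Sketch: two-segment piecewise-linear fit to locate the titration break point.
import numpy as np

def breakpoint(levels, response):
    best = (np.inf, None)
    for k in range(2, len(levels) - 2):        # candidate break indices
        sse = 0.0
        for sl in (slice(None, k + 1), slice(k, None)):
            coef = np.polyfit(levels[sl], response[sl], 1)
            sse += ((response[sl] - np.polyval(coef, levels[sl]))**2).sum()
        if sse < best[0]:
            best = (sse, levels[k])
    return best[1]

nava = np.arange(0.5, 4.5, 0.5)                # NAVA levels (synthetic)
vt = np.where(nava < 2.0, 200 + 150 * nava, 500 + 20 * (nava - 2.0))
print("NAVAal ~", breakpoint(nava, vt))        # steep rise flattens near 2.0
```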
Abstract:
Ever since the invention of the internal combustion engine, generating more power and achieving better efficiency have been major goals for designers. Variable compression ratio technology is one way to achieve those goals. This paper discusses a method of varying the compression ratio of an inline 4-cylinder engine through the use of a 4-bar linkage and gear mechanism. The mechanism was shown to vary the compression ratio of the engine easily, and it shows promise of becoming a technology used in future engine designs.
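A small sketch of why shifting the piston's top-dead-center position changes the compression ratio, CR = (V_swept + V_clearance) / V_clearance: the linkage effectively alters the clearance volume. Bore, stroke, and clearance heights below are illustrative numbers, not the paper's engine.

```python
# Sketch: compression ratio as a function of effective clearance height.
import math

def compression_ratio(bore, stroke, clearance_height):
    area = math.pi * (bore / 2.0)**2
    v_swept = area * stroke
    v_clear = area * clearance_height
    return (v_swept + v_clear) / v_clear

# Raising effective TDC by ~1 mm (via the linkage) changes CR markedly.
for h in (8.0, 9.0, 10.0):          # clearance height in mm
    print(h, round(compression_ratio(bore=86.0, stroke=86.0,
                                     clearance_height=h), 1))
```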
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for subspace decomposition and polynomial rooting, tasks that are traditionally implemented with sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, so does the complexity of the signal processing algorithms, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps that act as bottlenecks to real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus coming closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the various hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation, and the system was developed with the objective of achieving high throughput; various modern cores available in FPGAs were used to maximize performance, and these modules are presented in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
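A sketch of the direction proposed, not the thesis's algorithm: Newton's method parallelizes naturally for rooting, since each starting point iterates independently. Seeding near the unit circle reflects where root-MUSIC roots of interest lie; the polynomial below is a toy stand-in with known roots.

```python
# Sketch: data-parallel Newton iteration for polynomial rooting.
import numpy as np

def newton_roots(coeffs, n_starts=64, iters=50, tol=1e-12):
    p = np.poly1d(coeffs)
    dp = p.deriv()
    # Roots of interest for root-MUSIC lie near the unit circle: seed there.
    z = np.exp(2j * np.pi * np.arange(n_starts) / n_starts)
    for _ in range(iters):                    # all starts update in lockstep
        z = z - p(z) / dp(z)
    z = z[np.abs(p(z)) < tol]                 # keep converged iterates
    roots = []                                # deduplicate near-equal roots
    for r in z:
        if all(abs(r - q) > 1e-6 for q in roots):
            roots.append(r)
    return np.array(roots)

# Toy polynomial with two unit-circle roots and two real roots.
coeffs = np.poly([np.exp(1j * 0.5), np.exp(-1j * 0.5), 0.5, 2.0])
print(np.sort_complex(newton_roots(coeffs)))
```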