960 results for Computer software - Development
Abstract:
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the quality of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method was proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and simultaneously reconstructs the tomogram from highly undersampled tilt series. In this method, sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulation and real ET experiments on several morphologies are performed with a variety of setups. Reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
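The alternating procedure described above can be pictured with a short sketch. The code below is purely illustrative: the `forward`/`backward` callables stand in for a tilt-series projector and backprojector (e.g. supplied by a tomography toolbox, not shown), the patch-based step uses scikit-learn's generic dictionary learning, and none of the parameter choices reproduce the thesis's actual DLET implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dlet_like_reconstruction(sinogram, forward, backward, shape,
                             n_outer=10, n_atoms=64, patch=(8, 8), step=0.1):
    """Alternating reconstruction in the spirit of DLET (illustrative sketch only).

    `forward(img)` maps an image to tilt-series space and `backward(res)` maps a
    residual back; both are assumptions here, not code from the thesis.
    """
    x = np.zeros(shape)
    for _ in range(n_outer):
        # Data-fidelity step: gradient update toward the measured tilt series.
        residual = forward(x) - sinogram
        x = x - step * backward(residual)

        # Sparsity step: learn a patch dictionary and sparse-code the current estimate.
        patches = extract_patches_2d(x, patch)
        flat = patches.reshape(len(patches), -1)
        mean = flat.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=3)
        codes = dico.fit(flat - mean).transform(flat - mean)
        denoised = codes @ dico.components_ + mean
        # Averaging the overlapping denoised patches gives the updated tomogram slice.
        x = reconstruct_from_patches_2d(denoised.reshape(patches.shape), shape)
    return x
```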
Abstract:
The increased prevalence of iron deficiency among infants can be attributed, among other factors, to the consumption of an iron-deficient diet, or of a diet that interferes with iron absorption, at the critical time of infancy. The gradual shift from breast milk to other foods and liquids is a transition period which greatly contributes to iron deficiency anaemia (IDA). The purpose of this research was to assess iron deficiency anaemia among infants aged six to nine months in Keiyo South Sub County. The specific objectives of this study were to establish the prevalence of iron deficiency anaemia and to assess dietary iron intake among infants aged 6 to 9 months. A cross-sectional study design was adopted in this survey. The study was conducted in three health facilities in Keiyo South Sub County. The infants were selected by use of a two-stage cluster sampling procedure. Systematic random sampling was then used to select a total of 244 mothers and their infants: eighty-two (82) infants were selected from Kamwosor sub-district hospital and eighty-one (81) from each of the Nyaru and Chepkorio health facilities. Interview schedules, 24-hour dietary recall and food frequency questionnaires were used to collect dietary iron intake data. Biochemical tests were carried out using the Hemo-control photometer at the health facilities. Infants whose hemoglobin levels were less than 11 g/dl were considered anaemic. Further, peripheral blood smears were conducted to ascertain the type of nutritional anaemia. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) computer software, version 17, 2009. Dietary iron intake was analyzed using the NutriSurvey 2007 computer software. Results indicated that the mean hemoglobin value was 11.3 ± 0.84 g/dl. Twenty-one point seven percent (21.7%) of the infants were anaemic, and all (100%) of the peripheral blood smears indicated iron deficiency anaemia. Dietary iron intake was a predictor of iron deficiency anaemia in this study (t = -3.138; p = 0.01). Iron deficiency anaemia was evident among infants in Keiyo South Sub County. The Ministry of Health should formulate and implement policies on screening for anaemia and ensure intensive nutrition education on iron-rich diets during child welfare clinics.
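As a small illustration of the two quantitative steps above, namely the Hb < 11 g/dl anaemia cut-off and testing dietary iron intake as a predictor, the sketch below uses made-up values, not the study's data, and a plain linear regression rather than whatever model produced the reported t-statistic.

```python
import numpy as np
from scipy import stats

# Hypothetical illustrative values, not the study's data.
hb = np.array([10.2, 11.5, 12.1, 10.8, 11.9, 10.5, 12.4, 11.1])    # haemoglobin, g/dl
iron_intake = np.array([4.1, 7.8, 9.2, 5.0, 8.5, 4.6, 10.1, 6.9])  # dietary iron, mg/day

# Anaemia cut-off used in the study: Hb < 11 g/dl.
anaemic = hb < 11.0
print(f"Anaemia prevalence: {100 * anaemic.mean():.1f}%")

# Simple linear regression of haemoglobin on dietary iron intake
# (the study reports t = -3.138, p = 0.01 for intake as a predictor of IDA;
#  the exact model used there is not reproduced here).
result = stats.linregress(iron_intake, hb)
print(f"slope = {result.slope:.3f}, p = {result.pvalue:.3f}")
```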
Abstract:
Doctorate in Management (Doutoramento em Gestão)
Abstract:
As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance costs. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates at predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, the task of predicting across projects is difficult to achieve. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, taking full advantage of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
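The general idea of reusing a prediction model across projects can be sketched as follows; the metrics, data and the simple rescaling step are illustrative assumptions and do not reproduce the adaptation approach proposed in the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical module metrics (e.g. size, complexity, churn) and fault labels.
rng = np.random.default_rng(0)
source_X = rng.normal(size=(200, 3))
source_y = (source_X[:, 1] > 0.5).astype(int)
target_X = 2.0 * rng.normal(size=(80, 3)) + 1.0   # a project with different metric distributions

# Train a fault predictor on the source project's (standardised) metrics.
model = LogisticRegression().fit(StandardScaler().fit_transform(source_X), source_y)

# Naive cross-project adaptation: standardise the target project's metrics against
# its own distribution before applying the source model (one simple option only;
# the dissertation's adaptation approach may differ).
target_pred = model.predict(StandardScaler().fit_transform(target_X))
print(f"Predicted fault-prone modules in target project: {target_pred.sum()} of {len(target_pred)}")
```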
Abstract:
Increasingly stringent limits on pollutant emissions, together with greater attention to fuel consumption, performance gains and drivability, lead to the development of ever more complicated engine control algorithms. At the same time, the propulsion unit is becoming an increasingly varied collection of subsystems that must work in unison. The calibration engineer is faced with a multitude of variables and algorithms that must be calibrated and tested, and needs tools that help analyse engine behaviour by providing concise and easily accessible results. This work reports the development of a combustion analysis system: the objective was to develop software that provides the best solutions for the analysis of an internal combustion engine, in terms of accuracy of the results, variety of the calculations made available, ease of use, and integration with other systems through the sharing of the computed results in real time.
Abstract:
Internet of Things systems are pervasive systems that have evolved from cyber-physical systems into large-scale systems. Due to the number of technologies involved, software development faces several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, and thus to interoperability. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed over several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment on distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each of them, we propose a tool addressing the underlying challenge. The first tool handles heterogeneity at the transport and application protocol level, the second manages different data formats, and the third aims to obtain an optimal deployment. To realize the tools, we adopted a linguistic approach, i.e. we provided specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases to show that the tools can be used in practice and that they help to achieve the expected level of interoperability. In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.
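The protocol- and data-agnostic style of programming referred to above can be hinted at with a toy sketch; the `Port` abstraction, the registries and all names below are hypothetical illustrations, not the linguistic abstractions developed in the dissertation.

```python
import json
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical registries: the concrete codecs/transports are illustrative stand-ins.
CODECS: Dict[str, Callable[[dict], bytes]] = {
    "json": lambda payload: json.dumps(payload).encode("utf-8"),
}
TRANSPORTS: Dict[str, Callable[[str, bytes], None]] = {
    "log": lambda address, data: print(f"-> {address}: {data!r}"),
    # "http", "mqtt", "coap", ... would be registered here in a real system.
}

@dataclass
class Port:
    """A send operation described by *what* to send, not *how* to send it."""
    address: str
    transport: str = "log"
    codec: str = "json"

    def send(self, payload: dict) -> None:
        TRANSPORTS[self.transport](self.address, CODECS[self.codec](payload))

# The same application code works regardless of the protocol/format bound to the port.
Port(address="sensors/kitchen").send({"temperature": 21.5})
```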
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility to merge the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications which can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were sought by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method levels). The data exploited and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to other related works are discussed.
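A flavour of the text-based side of PLI: the toy classifier below uses character n-grams over source snippets. The tiny corpus and the model choice are illustrative assumptions, not the models built in the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real PLI model would be trained on large code archives.
snippets = [
    "def main():\n    print('hi')",             # Python
    "public static void main(String[] args)",   # Java
    "#include <stdio.h>\nint main(void)",       # C
    "import numpy as np\nx = np.zeros(3)",      # Python
]
labels = ["Python", "Java", "C", "Python"]

# Character n-grams capture keywords, punctuation and layout cues typical of each language.
pli = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
pli.fit(snippets, labels)
print(pli.predict(["for i in range(10):\n    print(i)"]))  # classify an unseen snippet
```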
Abstract:
Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of these data are critical to understanding the regulatory logic encoded in the genome, by which the cell dynamically affects its physiology and interacts with its environment. Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization, with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to their coordinates on the genome. Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations, yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
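The concluding point, genomic coordinates acting as a common key that joins heterogeneous data, can be illustrated with a minimal interval-overlap join; the feature names and coordinates below are invented for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Feature:
    name: str
    start: int   # genomic coordinates (same reference and strand assumed for simplicity)
    end: int

def overlaps(a: Feature, b: Feature) -> bool:
    """Half-open interval overlap test on genomic coordinates."""
    return a.start < b.end and b.start < a.end

# Two heterogeneous tracks related only by where they fall on the genome.
genes: List[Feature] = [Feature("geneA", 100, 900), Feature("geneB", 1500, 2300)]
chip_peaks: List[Feature] = [Feature("peak1", 850, 950), Feature("peak2", 1400, 1600)]

for gene in genes:
    hits = [p.name for p in chip_peaks if overlaps(gene, p)]
    print(gene.name, "overlapping peaks:", hits)
```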
Abstract:
Obesity has been recognized as a worldwide public health problem. It significantly increases the chances of developing several diseases, including Type II diabetes. The roles of insulin and leptin in obesity involve reactions that can be better understood when they are presented step by step. The aim of this work was to design software based on data from some of the most recent publications on obesity, especially those concerning the roles of insulin and leptin in this metabolic disturbance. The most notable characteristic of this software is the use of animations representing the cellular response, together with the presentation of recently discovered mechanisms by which insulin and leptin participate in the processes leading to obesity. The software was field tested in the Biochemistry of Nutrition web-based course. After using the software and discussing its contents in chatrooms, students were asked to answer an evaluation survey about the whole activity and the usefulness of the software in the learning process. The teaching assistants (TAs) evaluated the software as a tool to help in the teaching process. The students' and TAs' satisfaction was very evident and encouraged us to move forward with the software development and to improve the use of this kind of educational tool in biochemistry classes.
Abstract:
Genetic recombination can produce heterogeneous phylogenetic histories within a set of homologous genes. Delineating recombination events is important in the study of molecular evolution, as inference of such events provides a clearer picture of the phylogenetic relationships among different gene sequences or genomes. Nevertheless, detecting recombination events can be a daunting task, as the performance of different recombination-detecting approaches can vary, depending on evolutionary events that take place after recombination. We recently evaluated the effects of post-recombination events on the prediction accuracy of recombination-detecting approaches using simulated nucleotide sequence data. The main conclusion, supported by other studies, is that one should not depend on a single method when searching for recombination events. In this paper, we introduce a two-phase strategy, applying three statistical measures to detect the occurrence of recombination events, and a Bayesian phylogenetic approach in delineating breakpoints of such events in nucleotide sequences. We evaluate the performance of these approaches using simulated data, and demonstrate the applicability of this strategy to empirical data. The two-phase strategy proves to be time-efficient when applied to large datasets, and yields high-confidence results.
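A crude sketch of the screening spirit of such a two-phase strategy: a sliding-window scan that flags positions where the query's closest putative parent changes. This stands in only loosely for the three statistical measures and the Bayesian breakpoint delineation used in the paper; the sequences are toy data.

```python
def window_identity(a: str, b: str) -> float:
    """Fraction of identical positions between two equal-length aligned strings."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scan_for_breakpoints(query: str, parent1: str, parent2: str,
                         window: int = 30, step: int = 10):
    """Flag window starts where the closer putative parent switches (candidate breakpoints)."""
    closer_prev, candidates = None, []
    for start in range(0, len(query) - window + 1, step):
        q = query[start:start + window]
        id1 = window_identity(q, parent1[start:start + window])
        id2 = window_identity(q, parent2[start:start + window])
        closer = 1 if id1 >= id2 else 2
        if closer_prev is not None and closer != closer_prev:
            candidates.append(start)
        closer_prev = closer
    return candidates

# Toy aligned sequences: the query is parent1-like in the first half, parent2-like after.
parent1, parent2 = "A" * 120, "C" * 120
query = "A" * 60 + "C" * 60
print(scan_for_breakpoints(query, parent1, parent2))  # -> [50]: the switch at 60 falls inside this window
```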
Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research
Abstract:
For the last three decades, the engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution", which has included the introduction of the personal computer, the development of email and the world wide web, and broadband Internet connections at home. Herein the writer compares the performance of several digital tools with that of traditional library resources. While new specialised search engines and open-access digital repositories may fill a gap between conventional search engines and traditional references, they should not be confused with real libraries and international scientific databases that encompass textbooks and peer-reviewed scholarly works. An absence from some Internet search listings, databases and repositories is not an indication of standing. Researchers, engineers and academics should remember these key differences when assessing the quality of bibliographic "research" based solely upon Internet searches.
Abstract:
The demand for more pixels is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers a solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed, as does the computer software required to drive such devices. We present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. A short-throw lens (0.8:1) on each projector yields a 21-in. diagonal for each image tile; the composite image on a 3×1 array is 3840×1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. The projectors are mounted on precision six-axis positioners, which allow pixel-level alignment. A fiber-optic beamsplitting system and a single set of red, green, and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted separately to set or change characteristics such as contrast, brightness, or gamma curves. The projectors were then matched carefully: photometric variations were corrected, leading to a seamless image. Photometric measurements were performed to characterize the display and are reported here. The system is driven by a small PC cluster fitted with graphics cards and running Linux. It can be scaled to accommodate an array of 2×3 or 3×3 projectors, thus increasing the number of pixels in the final image. Finally, we present current uses of the display in fields such as astrophysics and archaeology (remote sensing).
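The quoted figures can be checked with a couple of lines of arithmetic, assuming each D-ILA tile is 1280×1024 pixels (which is what a 3×1 composite of 3840×1024 implies):

```python
import math

tile_w, tile_h = 1280, 1024   # implied per-projector resolution (3840/3 by 1024)
diag_in = 21.0                # quoted 21-inch tile diagonal

dpi = math.hypot(tile_w, tile_h) / diag_in    # diagonal pixel count over diagonal length
composite = (3 * tile_w, 1 * tile_h)          # 3x1 array of tiles
print(f"composite: {composite[0]}x{composite[1]}, ~{dpi:.0f} dpi")  # 3840x1024, ~78 dpi
```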
Abstract:
This paper reviews the potential of three types of spatial technology for land managers, namely satellite imagery, satellite positioning systems and supporting computer software. Developments in remote sensing and the relative advantages of multispectral and hyperspectral images are discussed. The main challenge to the wider use of remote sensing as a land management tool is seen as uncertainty over whether apparent relationships between biophysical variables and spectral reflectance are direct and causal, or artefacts of particular images. Developments in satellite positioning systems are presented in the context of land managers’ need for position estimates in situations where absolute precision may or may not be required. The role of computer software in supporting developments in spatial technology is described. Spatial technologies are seen as having matured beyond empirical applications to the stage where they are useful and reliable land management tools. In addition, computer software has become more user-friendly, and this has facilitated data collection and manipulation by semi-expert as well as specialist staff.