911 results for Case Based Computing
Abstract:
MOTIVATION: Analysis of millions of pyro-sequences is currently playing a crucial role in the advance of environmental microbiology. Taxonomy-independent, i.e. unsupervised, clustering of these sequences is essential for the definition of Operational Taxonomic Units. For this application, reproducibility and robustness should be the most sought after qualities, but have thus far largely been overlooked. RESULTS: More than 1 million hyper-variable internal transcribed spacer 1 (ITS1) sequences of fungal origin have been analyzed. The ITS1 sequences were first properly extracted from 454 reads using generalized profiles. Then, otupipe, cd-hit-454, ESPRIT-Tree and DBC454, a new algorithm presented here, were used to analyze the sequences. A numerical assay was developed to measure the reproducibility and robustness of these algorithms. DBC454 was the most robust, closely followed by ESPRIT-Tree. DBC454 features density-based hierarchical clustering, which complements the other methods by providing insights into the structure of the data. AVAILABILITY: An executable is freely available for non-commercial users at ftp://ftp.vital-it.ch/tools/dbc454. It is designed to run under MPI on a cluster of 64-bit Linux machines running Red Hat 4.x, or on a multi-core OSX system. CONTACT: dbc454@vital-it.ch or nicolas.guex@isb-sib.ch.
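The abstract does not spell out the numerical assay, so the following is only a generic sketch of how clustering reproducibility is commonly quantified: the OTU assignments from two runs of the same pipeline (for example on reordered or resampled reads) are compared with the Adjusted Rand Index. The label vectors below are toy placeholders, not output of DBC454 or any of the other tools named above.

# Hedged sketch: reproducibility of two clusterings of the same reads, measured
# with the Adjusted Rand Index (1.0 = identical partitions, ~0 = chance agreement).
from sklearn.metrics import adjusted_rand_score

run_a = [0, 0, 1, 1, 2]   # hypothetical OTU ids for five reads, first run
run_b = [1, 1, 0, 0, 2]   # hypothetical OTU ids for the same reads, second run

print(f"ARI between runs: {adjusted_rand_score(run_a, run_b):.3f}")  # 1.0: same partition, relabelled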
Abstract:
More than 60% of neuroendocrine tumours, also called carcinoids, are localised within the gastrointestinal tract. Small bowel neuroendocrine tumours have been diagnosed with increasing frequency over the past 35 years and are the second most frequent tumours of the small intestine. Ileal neuroendocrine tumours are diagnosed late because patients present with non-specific symptoms. We present the case of a patient as an illustrative example and, on this basis, provide a brief review of the literature on small bowel neuroendocrine tumours, summarizing several recent changes in the field concerning the classification criteria of these tumours, new recommendations and current advances in diagnosis and treatment. This patient came to our emergency department with a complete bowel obstruction, along with a 2-year history of peristaltic abdominal pain, vomiting and episodes of diarrhoea. During emergency laparotomy, an ileal stricture was observed, which proved to be a neuroendocrine tumour of the small bowel.
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip and MinCut methods performed well according to our criteria; on the other hand, the average consensus, split fit and most similar supertree methods showed poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
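The first evaluation criterion above, similarity of a supertree to its input trees, is usually expressed with a tree distance such as the Robinson-Foulds distance. The sketch below is a hedged, generic illustration rather than the exact measure used in the study: each rooted tree is represented by its set of non-trivial clades, and the distance is the size of the symmetric difference of those sets. The toy trees are assumptions for illustration only.

# Hedged sketch: Robinson-Foulds-style distance between two rooted trees over
# the same taxa, each tree given as a set of non-trivial clades.
def rf_distance(clades_a: set, clades_b: set) -> int:
    """Number of clades present in exactly one of the two trees."""
    return len(clades_a ^ clades_b)

supertree  = {frozenset({"A", "B"}), frozenset({"C", "D"})}   # ((A,B),(C,D))
input_tree = {frozenset({"A", "C"}), frozenset({"B", "D"})}   # ((A,C),(B,D))
print(rf_distance(supertree, input_tree))                     # 4: no shared clade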
Abstract:
In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the places most severely hit during the Mitch event. A landslide map was produced at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map, the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a pixel size of 20 m × 20 m was also employed for the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of four terrain factors, extracted from the DEM, which contributed to the terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools and expressed in a terrain failure susceptibility map. In order to estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS, we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
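The abstract does not name the exact bivariate statistic, so the sketch below illustrates one common choice, the frequency ratio: for each class of a DEM-derived factor, the share of mapped failure pixels in the class is compared with the share expected if failures were spread evenly. The tiny rasters are invented placeholders, not the Nicaraguan data.

# Hedged sketch of a bivariate (frequency-ratio) susceptibility calculation.
import numpy as np

slope_class = np.array([[0, 0, 1, 2],
                        [0, 1, 1, 2],
                        [1, 1, 2, 2]])   # hypothetical factor classes per pixel
failures    = np.array([[0, 0, 0, 1],
                        [0, 0, 1, 1],
                        [0, 1, 1, 1]])   # 1 = pixel inside a mapped terrain failure

for c in np.unique(slope_class):
    in_class = slope_class == c
    fr = failures[in_class].mean() / failures.mean()   # >1: class is failure-prone
    print(f"class {c}: frequency ratio = {fr:.2f}")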
Abstract:
This study looks at how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find an optimal value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimal memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable across different workloads while achieving significantly higher throughput and energy efficiency than traditional approaches that use a fixed number of jobs or a fixed memory threshold.
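The abstract describes the controller only at a high level, so the following is a minimal, hedged sketch of the idea rather than the thesis' actual algorithm: two fuzzy rules nudge the free-memory threshold up when overall memory load is low and down when it is high, which in turn lets more or fewer jobs be admitted per core. All membership functions and numbers are assumptions.

# Hedged sketch: fuzzy adaptation of a memory threshold from the observed load.
def membership_low(load):            # load = used memory / total memory, in [0, 1]
    return max(0.0, min(1.0, (0.7 - load) / 0.4))    # fully "low" below 0.3

def membership_high(load):
    return max(0.0, min(1.0, (load - 0.5) / 0.4))    # fully "high" above 0.9

def adapt_threshold(threshold_gb, load, step_gb=0.5):
    """IF load is low THEN raise the threshold; IF load is high THEN lower it
    (weighted, Sugeno-style combination of the two rule outputs)."""
    low, high = membership_low(load), membership_high(load)
    if low + high == 0.0:
        return threshold_gb
    delta = (low * step_gb - high * step_gb) / (low + high)
    return max(0.5, threshold_gb + delta)

threshold = 4.0                       # GB of free memory required to start a new job
for load in (0.2, 0.6, 0.95):
    threshold = adapt_threshold(threshold, load)
    print(f"load {load:.2f} -> threshold {threshold:.2f} GB")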
Abstract:
The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications because of their significantly lower cost than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely the industrial automation used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike with CISC processors, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, consisting of an integrated operating system, development environment, user interface and application market, which give customers more choices thanks to hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies, both in software and in hardware. This has driven the microprocessor-based personal computer industry, with its few dominating closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based hardware. They enjoy admirable profitability levels on a very narrow customer base, thanks to strong technology-enabled customer lock-in and the customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets, and will eventually also affect industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
The aim of this work is to identify the most important intangible resources needed in product development that takes place at the intersection of industries. Products created at industry intersections are often radical, which makes them interesting and gives them considerable business potential. This study approaches product development from a resource-based perspective. The knowledge-based and relational views are also employed to emphasize the focus on intangible resources. The study builds a framework in which different resource categories are examined. The chosen categories are technological, marketing, management- and administration-related, and relational resources. The empirical part examines two new product concepts that have emerged at industry intersections. The goal of the empirical part is to define the preliminary product concepts under study in more detail and to determine what kinds of resources are needed to realize them. The current state of the required resources is also assessed, and it is considered whether missing resources should be developed in-house or acquired externally. The study was carried out through expert interviews. Based on the two case studies, it appears that relational resources are very important in product development at industry intersections. Technological resources are also important. The importance of marketing resources depends on the final product concept, whereas management- and development-related resources are important in creating these concepts.
Abstract:
Productivity and profitability are important concepts and measures for describing the performance and success of a firm. We know that an increase in productivity decreases the costs per unit produced and leads to better profitability. This common knowledge is not, however, enough in the modern business environment. Productivity improvement is one means among others for increasing the profitability of operations. There are many means to increase productivity. The use of these means presupposes operative decisions, and these decisions presuppose information about the effects of these means. Productivity improvement actions are in general taken at floor level, with machines, cells, activities and human beings. Profitability is most meaningful at the level of the whole firm. It has been very difficult, or even impossible, to analyze closely enough the economic aspects of changes at floor level with traditional costing systems. Only recently have new ideas in accounting brought in elements which make it possible to consider these phenomena where they actually happen. The aim of this study is to support the selection of objects for productivity improvement and to develop a method to analyze the effects of a productivity change in an activity on the profitability of a firm. A framework for systemizing the economic management of productivity improvement is developed in this study. This framework is a systematic, two-stage way to analyze the effects of productivity improvement actions in an activity on the profitability of a firm. At the first stage of the framework, a simple selection method is presented, based on the worth, possibility and necessity of the improvement actions in each activity. This method is called Urgency Analysis. At the second stage, it is analyzed how much a certain change of productivity in an activity affects the profitability of a firm. A theoretical calculation model is presented with which it is possible to analyze the effects of a productivity improvement in monetary terms. On the basis of this theoretical model, a tool is built for analysis at the firm level. The usefulness of this framework was empirically tested with data from a profit center of a medium-sized Finnish firm operating in the metal industry. The results show that the framework provides valuable information about the economic effects of productivity improvement to support management decision making.
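The calculation model itself is not given in the abstract; as a hedged, invented illustration of the kind of activity-level arithmetic such a model rests on, the sketch below converts a productivity gain in one activity into annual monetary savings that can then be set against firm-level profit. All figures and the function name are assumptions.

# Hedged sketch: monetary effect of a productivity change in a single activity.
def annual_saving(units_per_year, hours_per_unit, cost_per_hour, productivity_gain):
    """Hours saved per unit, times the activity's cost rate, times yearly volume."""
    new_hours_per_unit = hours_per_unit / (1 + productivity_gain)
    return units_per_year * (hours_per_unit - new_hours_per_unit) * cost_per_hour

saving = annual_saving(units_per_year=50_000, hours_per_unit=0.40,
                       cost_per_hour=60.0, productivity_gain=0.10)  # +10 % output per hour
print(f"Estimated annual saving: {saving:,.0f} EUR")                # about 109,091 EUR here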
Abstract:
This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing and the Mobile Peer-to-Peer paradigm. The technologies were investigated from the perspective of three mobile operators, a device manufacturer and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovations, and the concept of the business model. The literature review builds a propositional framework for estimating the amount of radical change in the companies' business model, with two intermediate variables: the disruptiveness potential of a new technology and the strategic importance of a new technology to a firm. The data were gathered in group discussion sessions in each company. The results of each case analysis were brought together to evaluate how firms interpret the potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, the firms did not see the new technologies as a threat in terms of potential competence disruption.
Abstract:
Background: Protein domains represent the basic units in the evolution of proteins. Domain duplication and shuffling by recombination and fusion, followed by divergence, are the most common mechanisms in this process. Such domain fusion and recombination events are predicted to occur only once for a given multidomain architecture. However, other scenarios may be relevant in the evolution of specific proteins, such as convergent evolution of multidomain architectures. With this in mind, we study glutaredoxin (GRX) domains, because these domains of approximately one hundred amino acids are widespread in archaea, bacteria and eukaryotes and participate in fusion proteins. GRXs are responsible for the reduction of protein disulfides or glutathione-protein mixed disulfides and are involved in cellular redox regulation, although their specific roles and targets are often unclear. Results: In this work we analyze the distribution and evolution of GRX proteins in archaea, bacteria and eukaryotes. We study over one thousand GRX proteins, each containing at least one GRX domain, from hundreds of different organisms and trace the origin and evolution of the GRX domain within the tree of life. Conclusion: Our results suggest that single-domain GRX proteins of the CGFS and CPYC classes have each evolved through duplication and divergence from one initial gene that was present in the last common ancestor of all organisms. Remarkably, we identify a case of convergent evolution in domain architecture that involves the GRX domain. Two independent recombination events of a TRX domain to a GRX domain are likely to have occurred, which is an exception to the dominant mechanism of domain architecture evolution.
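As a hedged sketch of the kind of bookkeeping behind such an analysis (not the study's actual pipeline), the code below represents each protein by the linear order of its domains and flags architectures that occur in more than one lineage; such shared architectures are candidates for independent (convergent) fusion events, which would still need phylogenetic confirmation. The records are toy assumptions, not real GRX annotations.

# Hedged sketch: group proteins by multidomain architecture and flag architectures
# shared across lineages as candidates for convergent evolution.
from collections import defaultdict

proteins = [                       # (protein id, lineage, domain order) - toy data
    ("p1", "bacteria",  ("TRX", "GRX")),
    ("p2", "eukaryota", ("TRX", "GRX")),
    ("p3", "archaea",   ("GRX",)),
    ("p4", "eukaryota", ("GRX", "GRX")),
]

by_architecture = defaultdict(set)
for _, lineage, arch in proteins:
    by_architecture[arch].add(lineage)

for arch, lineages in by_architecture.items():
    if len(lineages) > 1:
        print("shared architecture", "-".join(arch), "in", sorted(lineages))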
Abstract:
This master's thesis examines the WAP Push framework. The WAP standards define how Internet-type services, which can be accessed with various mobile terminal devices, are implemented in an efficient, network-technology-independent way. WAP is based on the Internet but takes into account the limitations and special characteristics of small terminal devices and mobile networks. The WAP Push framework defines network-initiated delivery of service content. The theoretical part of the work reviews the general WAP architecture and the WAP protocol stack, using the architecture and protocol stack of the wired Internet as points of comparison. Building on this, the WAP Push framework is introduced. The practical part describes the design and development of a WAP Push proxy server. The WAP Push proxy server is a central network element in the WAP Push framework: it connects the Internet and the mobile network in a way that hides the technology differences from the service provider on the Internet side.
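As a hedged sketch of the proxy server's role only (it does not implement the actual WAP Push Access Protocol or over-the-air delivery), the code below accepts pushed content from an Internet-side push initiator over plain HTTP and hands it to a stub standing in for the mobile-network side. The X-Recipient header and the deliver_to_terminal function are hypothetical.

# Hedged sketch: minimal push proxy skeleton.
from http.server import BaseHTTPRequestHandler, HTTPServer

def deliver_to_terminal(msisdn: str, payload: bytes) -> None:
    # Placeholder for delivery over the mobile network (e.g. an SMS bearer).
    print(f"would deliver {len(payload)} bytes to terminal {msisdn}")

class PushHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        msisdn = self.headers.get("X-Recipient", "unknown")   # hypothetical header
        deliver_to_terminal(msisdn, payload)
        self.send_response(202)                               # accepted for delivery
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), PushHandler).serve_forever()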
Abstract:
The goal of this master's thesis was to assess the suitability of e-learning in the target company and to determine whether classroom training can be replaced with e-learning courses. An e-learning course on information system reporting was created and piloted. After the pilot, a user survey was conducted, usage data on the course were collected, and interviews were carried out. Based on the evaluation of the pilot users' experiences, e-learning is suitable for training on straightforward topics in the target company, but it cannot entirely replace classroom training. Classroom training should focus on more complex topics and problem solving. Based on the positive results, the company decided to continue developing e-learning. The e-learning course yields cost savings in the target company when the number of users exceeds 66. If the entire target audience of the piloted course completes it as e-learning, the costs are only about 15% of the corresponding costs of classroom delivery. In addition, the effectiveness of e-learning was studied, and the piloted course was assessed positively according to the Consensus model developed in this work.
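The cost figures behind the break-even point are not given in the abstract; the sketch below is a hedged illustration of the comparison only, with invented numbers chosen so that savings appear once the number of users exceeds roughly 66, as reported above.

# Hedged sketch: break-even comparison of classroom versus e-learning delivery.
def total_cost_classroom(users, cost_per_user=150.0):
    return users * cost_per_user

def total_cost_elearning(users, production_cost=9_000.0, cost_per_user=15.0):
    return production_cost + users * cost_per_user

for users in (50, 66, 67, 200):
    saving = total_cost_classroom(users) - total_cost_elearning(users)
    print(f"{users:>3} users: saving {saving:+,.0f} EUR")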
Abstract:
This master's thesis aims to survey and present, based on the literature, how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, where individuals are candidate solutions. The next population of candidate solutions is formed by exploiting the good properties of the current population through different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were sought in the literature, then classified and presented. The necessary basics of evolutionary algorithms were also presented. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling in parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
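As a hedged, generic illustration of the loop described above (fitness evaluation, selection, crossover, mutation), the sketch below frames it as toy test-data generation: it evolves an 8-bit input until it satisfies a hypothetical branch condition x == 200 in some code under test. It is not taken from any of the surveyed applications.

# Hedged sketch: a minimal genetic algorithm for toy test-data generation.
import random

TARGET = 200                                    # hypothetical branch condition x == 200

def fitness(x: int) -> float:
    return -abs(x - TARGET)                     # closer to the branch is better

def crossover(a: int, b: int) -> int:
    mask = random.getrandbits(8)
    return (a & mask) | (b & ~mask & 0xFF)      # mix the parents' bits

def mutate(x: int) -> int:
    return x ^ (1 << random.randrange(8))       # flip one random bit

population = [random.randrange(256) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 0:             # branch condition reached
        break
    parents = population[:10]                   # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(f"stopped at generation {generation}, best input {best} (target {TARGET})")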
Abstract:
The purpose of our project is to contribute to earlier diagnosis of AD and better estimates of its severity by using automatic analysis performed through new biomarkers extracted from non-invasive intelligent methods. The methods selected in this case are speech biomarkers oriented to Spontaneous Speech and Emotional Response Analysis. Thus the main goal of the present work is feature search in Spontaneous Speech oriented to pre-clinical evaluation for the definition of a test for AD diagnosis by a one-class classifier. The one-class classification problem differs from multi-class classification in one essential aspect: in one-class classification it is assumed that only information about one of the classes, the target class, is available. In this work we explore the problem of imbalanced datasets, which is particularly crucial in applications where the goal is to maximize recognition of the minority class, as in medical diagnosis. The use of information about outliers and Fractal Dimension features improves system performance.
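As a hedged sketch of the one-class setting described above, the code below trains scikit-learn's OneClassSVM on samples from the target class only and marks test samples as members or outliers; the synthetic vectors merely stand in for the speech and Fractal Dimension features and are not data from the study.

# Hedged sketch: one-class classification with only target-class training data.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
target_only = rng.normal(0.0, 1.0, size=(200, 4))             # training: target class only
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),          # target-like samples
                  rng.normal(4.0, 1.0, size=(5, 4))])         # outliers

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(target_only)
print(clf.predict(test))   # +1 = consistent with the target class, -1 = outlier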