21 results for Application specific algorithm
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of the environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purposes of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the workunits are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications.
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm combines the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so scheduling does not interfere with the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
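MPIT itself is built on MPI and threads; as a library-free illustration of the overlap idea, the sketch below uses plain Python threads, with a queue standing in for the MPI communication layer (all names here are illustrative, not from the thesis):

```python
import threading
import queue

def communication_thread(outbox, results):
    """Dedicated thread: drains the outbox (simulating MPI sends)
    while the main thread keeps computing."""
    while True:
        msg = outbox.get()
        if msg is None:          # sentinel: shut down
            break
        results.append(msg)      # stand-in for MPI_Send / scheduling work

def compute_and_overlap(workunits):
    outbox = queue.Queue()
    results = []
    comm = threading.Thread(target=communication_thread,
                            args=(outbox, results))
    comm.start()
    total = 0
    for w in workunits:          # computation proceeds without blocking
        total += w * w           # stand-in for the real compute kernel
        outbox.put(total)        # hand partial result to the comm thread
    outbox.put(None)
    comm.join()
    return total, results

total, sent = compute_and_overlap([1, 2, 3])
```

Because the communication thread owns all message handling, the compute loop never blocks on a send, which is the essence of the overlap the abstract describes.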
Abstract:
A power electronic device is a control and regulation system that converts electricity from its available form into a desired new form while managing the flow of electrical power from the source to the point of use. This differs from signal electronics, where electricity is typically used to transfer information by exploiting different states. Power electronic devices are usually compared in terms of reliability, size, efficiency, control accuracy and, of course, price. Typical power electronic devices include frequency converters, UPS (Uninterruptible Power Supply) devices, welding machines, induction heaters and various power supplies. Traditionally, the control of these devices has been implemented using microprocessors, ASICs (Application Specific Integrated Circuits), ICs (Integrated Circuits) and analogue controllers. This study analyses the suitability of FPGAs (Field Programmable Gate Arrays) for the control of power electronics. An FPGA consists of various logic elements and the interconnections between them. The logic elements are gate circuits and flip-flops. The interconnections and logic elements are fixed in the circuit; their composition and number cannot be changed afterwards. Programmability arises from the connections between the elements. The circuit contains numerous switches, up to millions, whose states can be set, so the basic elements of the circuit can be combined into a countless number of different functional entities. FPGAs have long been used in communication products, and their development has therefore been rapid in recent years, while prices have fallen. As a result, FPGAs have become an attractive option for the control of power electronic devices as well. In this doctoral thesis, the suitability of FPGAs was studied using two demanding and different practical power electronic devices: a frequency converter and a welding machine.
Suitable prototypes were built for both test cases together with Finnish industrial companies in the field, and their control electronics were converted to FPGA-based implementations. In addition, new types of control methods exploiting this new technology were developed. The operation of the prototypes was compared with corresponding commercial products controlled by conventional methods, and the benefits brought by the parallel computation enabled by FPGAs were observed in the operation of both power electronic devices. The thesis also presents new methods and tools for the development and testing of an FPGA-based control system. With the presented methods, product development can be made as fast and efficient as possible. In addition, an internal FPGA control and communication bus structure serving power electronics control applications was developed. The new communication structure also promotes the reusability of already implemented subsystems in future applications and product generations.
Abstract:
Reusability has become an increasingly important factor in modern software engineering, mainly because object-orientation has brought methods that make reuse easier. Today, more and more application developers consider how they can reuse already existing applications in their work. A developer who wants to use existing components outside the current project can turn to design patterns, class libraries or frameworks. These provide solutions to specific or general problems that have already been encountered. Application frameworks are collections of classes that provide a base for the developer. Application frameworks are mostly implementation-phase tools, but they can also be used in application design. The main purpose of a framework is to separate domain-specific functionality from application-specific functionality. Frameworks are usually divided into two categories, black box and white box; the difference between the categories is the way reuse is done. Application frameworks have properties that can be examined and compared between different frameworks: extensibility, reusability, modularity and scalability. These describe how a framework handles different platforms, changes in the framework, increasing demand for resources, and so on. Generally, application frameworks exhibit these properties to a good degree. When comparing a general-purpose framework with a more specific-purpose framework, the main difference lies in the reusability of the frameworks. This is mainly because a framework designed for a specific domain can be subject to constraints from external systems and resources, whereas with a general-purpose framework these constraints are set by the application developed on top of the framework.
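The black-box/white-box distinction can be made concrete with a minimal sketch (illustrative names, not from the source): white-box reuse extends the framework by inheritance and overriding, while black-box reuse only configures it through its public interface:

```python
class ReportFramework:
    """White-box reuse: the application extends the framework by
    subclassing and overriding hook methods (internals are visible)."""
    def run(self):
        return f"{self.header()} | {self.body()}"
    def header(self):
        return "generic header"
    def body(self):
        raise NotImplementedError   # hot spot the application must fill

class SalesReport(ReportFramework):
    def body(self):
        return "sales figures"

class BlackBoxReport:
    """Black-box reuse: the application configures the framework
    through its interface only, here by plugging in a callable."""
    def __init__(self, body_provider):
        self._body = body_provider
    def run(self):
        return f"generic header | {self._body()}"

white = SalesReport().run()
black = BlackBoxReport(lambda: "sales figures").run()
```

Both calls produce the same report, but the white-box variant depends on the framework's internal structure while the black-box variant does not.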
Abstract:
Advances in microchip manufacturing and design technologies make it possible to manufacture ever more complex microchips. Verification has thus become the most time-consuming part of the process, since the need for verification grows exponentially with complexity. Although various strategies for verifying chip integration have been proposed, e.g. distributing verification over the whole design process, even more than half of the total effort spent on designing and manufacturing a chip goes into verification. Reusable components play a major role in chip design, but in verification reusability has not been properly adopted, at least as far as verification software is concerned. This Master's thesis presents a reusable microchip verification software architecture that reduces the verification burden by removing the need to redesign and reimplement the software used in verification.
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the powerful expression of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity, and greatly simplified instruction decoding. For this M.Sc. (Tech) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks.
Other topics covered include the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
Abstract:
Advancements in IC processing technology have led to innovation and growth in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. The scaling down of technology and the increase in power density have a direct effect on the rise in temperature. This has increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which makes use of noise-variation-tolerant and leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots, and of the placement of silicon die layers, on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die that is closer to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems has been presented.
Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has been presented, and it has been shown that the proposed mapping algorithm reduces the effective chip area subjected to high temperatures when compared to the state of the art.
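The abstract does not reproduce the mapping algorithm itself; the following is a hypothetical greedy sketch of the general idea of thermal-aware mapping — placing the highest-power tasks first, each on the tile where the thermal influence of already-placed tasks is smallest, so heat is spread across the die (the heuristic and cost function are illustrative assumptions, not the thesis' method):

```python
import itertools

def thermal_aware_map(task_power, grid_w, grid_h):
    """Greedy sketch: place tasks in descending power order, each time
    on the free tile with the least accumulated thermal influence from
    the tasks already placed (influence decays with Manhattan distance)."""
    tiles = list(itertools.product(range(grid_w), range(grid_h)))
    placed = {}                                   # tile -> task power
    for power in sorted(task_power, reverse=True):
        def influence(tile):
            return sum(p / (1 + abs(tile[0] - t[0]) + abs(tile[1] - t[1]))
                       for t, p in placed.items())
        best = min((t for t in tiles if t not in placed), key=influence)
        placed[best] = power
    return placed

mapping = thermal_aware_map([10, 8, 1, 1], 2, 2)
```

On the 2x2 example the two hottest tasks end up on diagonally opposite tiles, which is exactly the spreading behaviour a thermal-aware mapper aims for.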
Abstract:
Object detection is a fundamental task of computer vision that is used as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information additional to the object location, for example its pose. The object class model, i.e. the appearance of the object parts and their spatial variance, the constellation, is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model.
Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
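As an illustration of the constellation idea — not the thesis' actual model — the sketch below scores candidate part configurations under per-part 2D Gaussian spatial densities in the canonical frame; a configuration whose parts lie close to their expected positions scores higher (part names and numbers are invented for the example):

```python
import numpy as np

def gaussian2d_logpdf(x, mean, cov):
    """Log-density of a 2-D Gaussian, used as the spatial model of one part."""
    d = x - mean
    inv = np.linalg.inv(cov)
    logdet = np.log(np.linalg.det(cov))
    return -0.5 * (d @ inv @ d + logdet + 2 * np.log(2 * np.pi))

def constellation_score(part_locations, spatial_model):
    """Generative score: sum of per-part spatial log-likelihoods in the
    object canonical frame (the appearance term is omitted for brevity)."""
    return sum(gaussian2d_logpdf(np.asarray(loc), m, c)
               for loc, (m, c) in zip(part_locations, spatial_model))

model = [(np.array([0., 0.]), np.eye(2)),    # e.g. "left eye" part
         (np.array([4., 0.]), np.eye(2))]    # e.g. "right eye" part
good = constellation_score([(0.1, 0.0), (3.9, 0.1)], model)
bad  = constellation_score([(2.0, 2.0), (2.0, -2.0)], model)
```

Thresholding such a score yields the high-recall, low-precision generative detections that the discriminative classifier then prunes.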
Abstract:
The aim of this study was to develop an analysis model for assessing the economic viability of the heat-loss recovery solution for the district heating pipe examined in the study, both at a general level and in its potential application sites. The study examines a district heating pipe inside which a heat collection pipe has been placed. The heat collection pipe is intended to collect heat from the casing of the district heating pipe and to keep the casing temperature below the ambient temperature, so that no heat losses occur to the surroundings. The analysis model was based on the general dimensioning principles of the heat pump process and the district heating network, as well as on cost data, representative of the time of the study, collected for the systems involved in the solution. The analysis model was implemented as an Excel spreadsheet that can in the future be applied to site-specific analysis and dimensioning of the system. The calculated payback periods turned out to be shorter than the estimated technical service life of the systems in all examined cases. In certain application sites, the system could also have a strategic role in reducing the risks of the district heating business.
Abstract:
Silencing tartrate-resistant acid phosphatase with the RNAi method: an unexpected effect in cells of the monocyte-macrophage lineage. RNA interference (RNAi), i.e. RNA silencing, was first discovered in plants, and in the 2000s the RNAi method has also been adopted in mammalian cells. RNAi is a mechanism in which short double-stranded RNA molecules, siRNAs, bind to a protein complex and then bind complementarily to the protein-coding messenger RNA, catalysing its degradation. As a result, the protein encoded by the RNA is not produced in the cell. In this work, a new siRNA design algorithm, siRNA_profile, was developed to support the RNA interference method; it searches the messenger RNA for target regions suitable for gene silencing. With an optimally designed siRNA molecule it may be possible to achieve long-lasting gene silencing and a specific reduction of the target protein level in the cell. Various chemical modifications, e.g. the 2'-Fluoro modification in the ribose ring of the siRNA molecule, increased the stability of the siRNA molecule in blood plasma as well as its efficacy. These are important properties of siRNA molecules when the RNAi method is applied for medical purposes. Tartrate-resistant acid phosphatase (TRACP) is an enzyme present in bone-resorbing cells, the osteoclasts, in antigen-presenting dendritic cells, and in macrophages, i.e. phagocytic cells, of various tissues. The biological function of the TRACP enzyme has not been resolved, but it is assumed that the ability of TRACP to produce reactive oxygen species plays a role both in bone-resorbing osteoclasts and in antigen-presenting dendritic cells. Macrophages that overexpress the TRACP enzyme also show increased intracellular production of reactive oxygen species and increased bacterial killing capacity.
Contrary to expectations, specific DNA and siRNA molecules intended for silencing the TRACP gene caused an increase in TRACP enzyme production in a monocyte-macrophage cell culture model. The effect of DNA and RNA molecules on the increase in TRACP production was also studied in monocyte-macrophage cells isolated from Toll-like receptor 9 (TLR9) knockout mice. The increase in TRACP production was found to be a sequence- and TLR9-independent response to extracellular DNA and RNA molecules. The observed increase in TRACP production suggests that the TRACP enzyme has a role in the cell's immune defence system.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm most efficient in dealing with a particular representation may be less efficient in dealing with other representations. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the possible variables including the centers, widths, and weights of the basis functions, and with the control parameters both kept fixed and adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the differential evolution algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and that the best setting is problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than the variants using all-fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
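The classic DE/rand/1/bin scheme underlying such studies can be sketched as follows; F (mutation factor) and CR (crossover rate) are exactly the kind of control parameters whose setting is investigated and adapted with fuzzy control (the fuzzy adaptation itself is omitted from this sketch, which is a generic textbook DE, not the thesis' code):

```python
import random

def differential_evolution(f, bounds, F=0.5, CR=0.9, NP=20, gens=100, seed=0):
    """Minimise f over box `bounds` with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    for _ in range(gens):
        for i in range(NP):
            # mutation: combine three distinct individuals other than i
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)      # guarantee one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            # clamp to bounds, then greedy selection
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lambda x: sum(v * v for v in x),
                              bounds=[(-5, 5)] * 2)
```

On the 2-D sphere function the population converges close to the origin; how fast it does so depends strongly on F and CR, which is the sensitivity the abstract reports.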
Abstract:
In the future, the telecommunications industry will largely focus on wireless applications and value-added services. To produce these services, companies in the field cooperate with a broad community of developers. The objective of this work was to improve the case company's existing operational model, which it applies in its cooperation with developers. The study focuses on mobile application developers. The operational model mainly covers the service offering in the developer alliance. In order to make strategic changes to the operational model, it was first important to identify the developers' needs and, second, to observe and analyse the environment and thereby identify the main competitors and their offerings to mobile application developers. The study was carried out by conducting a postal survey among developers and by performing a qualitative study of the competitors. The nature of the competitive situation and the potential competitors could be identified. The improvement proposals included both general and service-specific improvements.
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions.
We also design a fast cross-validation algorithm for the regularized least-squares type of learning algorithm. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
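A well-known example of fast cross-validation for regularized least squares is the closed-form leave-one-out computation via the hat matrix; the abstract does not state that this is exactly the thesis' algorithm, so the sketch below shows only the standard identity, which replaces n retrainings with a single factorization:

```python
import numpy as np

def rls_loocv_residuals(X, y, lam):
    """Leave-one-out residuals for regularized least squares in closed
    form: e_i = (y_i - yhat_i) / (1 - H_ii), where H is the hat matrix
    X (X^T X + lam I)^{-1} X^T. All n folds cost one matrix solve."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=30)
fast = rls_loocv_residuals(X, y, lam=1.0)

# naive check: actually retrain with sample 0 held out
mask = np.arange(30) != 0
w = np.linalg.solve(X[mask].T @ X[mask] + np.eye(3), X[mask].T @ y[mask])
```

The identity is exact for the squared-error cost, which is what makes regularized least squares especially amenable to fast cross-validation.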
Abstract:
The main objective of this thesis is to show that plate strips subjected to transverse line loads can be analysed by using the beam on elastic foundation (BEF) approach. It is shown that the elastic behaviour of both the centre line section of a semi-infinite plate supported along two edges, and the free edge of a cantilever plate strip, can be accurately predicted by calculations based on the two-parameter BEF theory. The transverse bending stiffness of the plate strip forms the foundation. The foundation modulus is shown, mathematically and physically, to be the zero-order term of the fourth-order differential equation governing the behaviour of the BEF, whereas the torsion rigidity of the plate acts like pre-tension in the second-order term. Direct equivalence is obtained for harmonic line loading by comparing the differential equations of Levy's method (a simply supported plate) with those of the BEF method. By equating the second- and zero-order terms of the semi-infinite BEF model for each harmonic component, two parameters are obtained for a simply supported plate of width B: the characteristic length, 1/λ, and the normalized sum, n, of the effects of axial loading and of the stiffening resulting from the torsion stiffness, nlin. This procedure gives the following result for the first mode when a uniaxial stress field is assumed (ν = 0): 1/λ = √2B/π and nlin = 1. For constant line loading, which is the superimposition of harmonic components, slightly different foundation parameters are obtained when the maximum deflection and bending moment values of the theoretical plate, with ν = 0, are equated with the BEF analysis solutions: 1/λ = 1.47B/π and nlin = 0.59 for a simply supported plate; and 1/λ = 0.99B/π and nlin = 0.25 for a fixed plate. The BEF parameters of the plate strip with a free edge are determined based solely on finite element analysis (FEA) results: 1/λ = 1.29B/π and nlin = 0.65, where B is the double width of the cantilever plate strip.
Biaxiality of the stress field, ν > 0, is shown not to affect the values of the BEF parameters significantly. The effect of the geometric nonlinearity caused by in-plane, axial and biaxial loading is studied theoretically by comparing the differential equations of Levy's method with those of the BEF approach. The BEF model is generalised to take into account the elastic rotation stiffness of the longitudinal edges. Finally, formulae are presented that take into account the effect of Poisson's ratio, and of geometric nonlinearity, on the bending behaviour resulting from axial and transverse in-plane loading. It is also shown that the BEF parameters of the semi-infinite model are valid for linear elastic analysis of a plate strip of finite length. The BEF model was verified by applying it to the analysis of bending stresses caused by misalignments in a laboratory test panel. In summary, it can be concluded that the advantages of the BEF theory are that it is a simple tool, and that it is accurate enough for specific stress analysis of semi-infinite and finite plate bending problems.
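The fourth-order governing equation referred to above is not written out in the abstract; in standard two-parameter BEF notation (w the deflection, EI the bending stiffness, q the line load) it reads:

```latex
% Fourth-order governing equation of the two-parameter BEF model
% (standard notation; the mapping to the plate quantities follows
% the thesis text):
EI\,\frac{d^{4}w}{dx^{4}} \;-\; N\,\frac{d^{2}w}{dx^{2}} \;+\; k\,w \;=\; q(x)
% k : foundation modulus (the zero-order term) -- here the transverse
%     bending stiffness of the plate strip
% N : pre-tension-like second-order term -- here the contribution of
%     the torsion rigidity of the plate
% 1/\lambda = (4EI/k)^{1/4} : characteristic length of the foundation
```

Equating the coefficients of the second- and zero-order terms with those of Levy's plate equations, harmonic by harmonic, is what yields the 1/λ and n values quoted above.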
Abstract:
Currently, numerous high-throughput technologies are available for the study of human carcinomas, and many variations of these techniques have been described in the literature. The common denominator of these methodologies is the large amount of data obtained in a single experiment, in a short time period, and at a fairly low cost. However, several problems and limitations have also been described for these methods. The purpose of this study was to test the applicability of two selected high-throughput methods, cDNA and tissue microarrays (TMA), in cancer research. Two common human malignancies, breast and colorectal cancer, were used as examples. This thesis aims to present some practical considerations that need to be addressed when applying these techniques. cDNA microarrays were applied to screen aberrant gene expression in breast and colon cancers. Immunohistochemistry was used to validate the results and to evaluate the association of selected novel tumour markers with the outcome of the patients. The type of histological material used in immunohistochemistry was evaluated, especially considering the applicability of whole tissue sections and different types of TMAs. Special attention was paid to the methodological details of the cDNA microarray and TMA experiments. In conclusion, many potential tumour markers were identified in the cDNA microarray analyses. Immunohistochemistry could be applied to validate the observed gene expression changes of selected markers and to associate their expression change with patient outcome. In the current experiments, both TMAs and whole tissue sections could be used for this purpose. This study showed for the first time that securin and p120 catenin protein expression predict breast cancer outcome and that the immunopositivity of carbonic anhydrase IX is associated with the outcome of rectal cancer.
The predictive value of these proteins was statistically evident also in multivariate analyses, with up to a 13.1-fold risk of cancer-specific death in a specific subgroup of patients.
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered to be an important approach to software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that may even exceed the savings associated with using it. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches to this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customizing a domain-specific modeling (DSM) tool framework.
The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.