943 results for framework-intensive applications
Abstract:
Molecular Characteristics of Neuroblastoma with Special Reference to Novel Prognostic Factors and Diagnostic Applications. Department of Medical Biochemistry and Genetics, Annales Universitatis Turkuensis, Medica-Odontologica, Painosalama Oy, Turku, Finland, 2009. Background: Neuroblastoma, the most common and most extensively studied childhood solid cancer, shows great clinical and biological heterogeneity. Most neuroblastoma patients older than one year have a poor prognosis despite intensive therapies. This biological heterogeneity, the hallmark of neuroblastoma, has hindered the discovery of prognostic tumour markers, and at present only a few molecular markers, such as MYCN oncogene status, have been adopted into clinical practice. Aims: The aim of the study was to improve the current prognostic methodology of neuroblastoma, especially by taking its biological heterogeneity into account. A further objective was to unravel novel molecular characteristics associated with neuroblastoma tumour progression and cell differentiation. Results: A new, strictly defined selection of neuroblastoma tumour spots of highest proliferation activity, hotspots, proved representative and reliable in an analysis of MYCN amplification status using a chromogenic in situ hybridization (CISH) technique. Based on hotspot tumour tissue microarray immunohistochemistry and high-resolution oligo-array-based comparative genomic hybridization, integrated with gene expression and in silico analysis of existing transcriptomics data, polysialylated neural cell adhesion molecule (NCAM) and a poorly characterized amplicon at 12q24.31 were found to be associated with outcome. In addition, the c-kit receptor, previously proposed as a new neuroblastoma treatment target, was found not to be mutated in our neuroblastoma samples.
Conclusions: Our studies indicate polysialylated NCAM and the 12q24.31 amplicon to be new molecular markers of important value in the prognostic evaluation of neuroblastoma. Moreover, the presented hotspot tumour tissue microarray method, together with CISH analysis of MYCN oncogene copy number, is directly applicable to clinical use. Key words: neuroblastoma, polysialic acid, neural cell adhesion molecule, MYCN, c-kit, chromogenic in situ hybridization, hotspot
Abstract:
In recent times of global turmoil, the need for uncertainty management has become ever more pressing. The need for enhanced foresight especially concerns capital-intensive industries, which must commit their resources and assets over long planning horizons. Scenario planning has been acknowledged to have many virtues - and limitations - in mapping the future and illustrating alternative development paths. The present study was initiated to address both the need for improved foresight in two capital-intensive industries, the paper and steel industries, and the imperfections in current scenario practice. The research problem was approached by developing a problem-solving vehicle which combines elements of a generic scenario process, face-to-face group support methods, deductive scenario reasoning and causal mapping into a fully integrated scenario process. The process, called the SAGES scenario framework, was empirically tested by creating alternative futures for the two industries. Three scenarios for each industry were generated, together with the identification of the key megatrends, the most important foreign investment determinants, key future drivers and leading indicators for the materialisation of the scenarios. The empirical results revealed a two-fold outlook for the paper industry, while the steel industry's future was seen as much more positive. The research found support, with some limitations, for utilising group support systems in a scenario and strategic planning context. Key perceived benefits include high time-efficiency, productivity and lower resource-intensiveness. Group support also seems to enhance participant satisfaction, encourage innovative thinking and provide users with personalised qualitative scenarios.
Abstract:
Immaturity of the gut barrier system in the newborn has been seen to underlie a number of chronic diseases originating in infancy and manifesting later in life. The gut microbiota and breast milk provide the most important maturing signals for the gut-related immune system and for reinforcement of the gut mucosal barrier function. Recently, the composition of the gut microbiota has been proposed to be instrumental in the control of host body weight and metabolism, as well as in the inflammatory state characterizing overweight and obesity. On this basis, inflammatory Western lifestyle diseases, including the development of overweight, may represent a potential target for probiotic interventions beyond the well-documented clinical applications. The purpose of the present undertaking was to study the efficacy and safety of perinatal probiotic intervention. The material comprised two ongoing, prospective, double-blind NAMI (Nutrition, Allergy, Mucosal immunology and Intestinal microbiota) probiotic interventions. In the mother-infant nutrition and probiotic study, a total of 256 women were randomized in their first trimester of pregnancy into a dietary intervention group and a control group. The intervention group received intensive dietary counselling provided by a nutritionist, and was further randomized at baseline, double-blind, to receive probiotics (Lactobacillus rhamnosus GG and Bifidobacterium lactis) or placebo. The intervention period extended from the first trimester of pregnancy to the end of exclusive breastfeeding. In the allergy prevention study, a total of 159 women were randomized, double-blind, to receive probiotics (Lactobacillus rhamnosus GG) or placebo 4 weeks before the expected delivery, the intervention extending for 6 months postnatally. Additionally, patient data on all premature infants with very low birth weight (VLBW) treated in the Department of Paediatrics, Turku University Hospital, during the years 1997-2008 were utilized.
The perinatal probiotic intervention reduced the risk of gestational diabetes mellitus (GDM) in the mothers, and perinatal dietary counselling reduced that of fetal overgrowth in GDM-affected pregnancies. Early gut microbiota modulation with probiotics modified the growth pattern of the child by restraining excessive weight gain during the first years of life. The colostrum adiponectin concentration was demonstrated to depend on maternal diet and nutritional status during pregnancy. It was also higher in the colostrum received by children who were of normal weight, as opposed to overweight, at the age of 10 years. The early perinatal probiotic intervention and the postnatal probiotic intervention in VLBW infants were shown to be safe. To conclude, the findings of this study provide clinical evidence supporting the involvement of the initial microbial and nutritional environment in the metabolic programming of the child. The manipulation of early gut microbial communities with probiotics might offer an applicable strategy to influence individual energy homeostasis and thus prevent excessive body-weight gain. The results add weight to the hypothesis that interventions aiming to prevent obesity and its metabolic consequences later in life should be initiated as early as the perinatal period.
Abstract:
Information technology service management has become an important task in delivering information for management purposes. Applications containing information for decision making, in particular, need to be agile and ready for change. The aim of this study was to find a solution for the successful implementation of an ITIL-based change management process for the enterprise resource management applications managed by the target organization. The literature review introduces frameworks that are important for the success of an IT project implementation, together with an overview of ITIL and the ITIL-based change management process. The result of the study is a framework of actions that need to be accomplished for a change management process implementation to succeed. It was noticed that defining success criteria, critical success factors and success measures is important for achieving the goals of the implementation project.
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered to be an important approach for software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could even exceed the savings associated with using them. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches for this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it puts the focus on specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework.
The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms, as well as of complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
Abstract:
Scrum is an agile project management approach that has been widely practiced in software development projects. It has been shown to increase quality, productivity, customer satisfaction, transparency and team morale, among other benefits. The concept of scrum is based on incremental innovation strategies, lean manufacturing, kaizen, iterative development and related ideas, and is usually contrasted with linear development models such as the waterfall method in the software industry. Traditional approaches to project management such as the waterfall method imply intensive upfront planning and approval of the entire project. These approaches work well in well-defined, stable environments where all the specifications of the project are known at the beginning. However, they do not tend to work well in uncertain environments where a project requires continuous development and the incorporation of new requirements. The scrum framework was inspired by Nonaka's article about new product development and was later adopted by software development practitioners. This research explores the conditions for, and benefits of, applying the scrum framework beyond software development projects. There are currently only a few case studies on scrum implementation in non-software projects, but there is a noticeable trend towards it in the scrum practitioners' community. The research is based on a real-life-context multiple case study analysis of three different non-software projects. The results of the research showed that, in order to succeed with scrum, projects need to satisfy certain conditions, both necessary and sufficient. Among them, the key factors are uncertainty of the project environment, outcomes that are not well defined, commitment of the scrum teams, and management support.
The top advantages of scrum implementation identified in the present research include improved transparency, accountability, team morale, communication, cooperation and collaboration. Further research is advised in order to validate these findings on a larger sample and to focus on more specific areas of scrum project management implementation.
Abstract:
Transportation of fluids is one of the most common and most energy-intensive processes in the industrial and HVAC sectors. Pumping systems are frequently subject to engineering malpractice when dimensioned, which can lead to poor operational efficiency. Moreover, pump monitoring requires dedicated measuring equipment, which implies costly investment. Inefficient pump operation and improper maintenance can increase energy costs substantially and even lead to pump failure. A centrifugal pump is commonly driven by an induction motor. Driving the induction motor with a frequency converter can diminish energy consumption in pump drives and provide better control of a process. In addition, induction machine signals can be estimated by modern frequency converters, dispensing with the use of sensors. If the estimates are accurate enough, a pump can be modelled and integrated into the frequency converter control scheme. This opens the possibility of joint motor and pump monitoring and diagnostics, allowing the detection of reliability-reducing operating states that can lead to additional maintenance costs. The goal of this work is to study the accuracy of the rotational speed, torque and shaft power estimates calculated by a frequency converter. Laboratory tests were performed in order to observe estimate behaviour in both steady-state and transient operation. An induction machine driven by a vector-controlled frequency converter, coupled with another induction machine acting as the load, was used in the tests. The estimated quantities were obtained through the frequency converter's Trend Recorder software. A high-precision HBM T12 torque-speed transducer was used to measure the actual values of the aforementioned variables. The effect of the flux optimization energy-saving feature on estimate quality was also studied. A processing function was developed in MATLAB for comparison of the obtained data.
The obtained results confirm the suitability of this particular converter to provide accurate enough estimates for pumping applications.
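The abstract mentions a MATLAB processing function for comparing the converter's estimates against the transducer measurements, but does not describe it. The following Python sketch illustrates the kind of comparison involved; the function name and the choice of RMSE plus mean relative error as metrics are assumptions for illustration, not the thesis implementation:

```python
import math

def estimate_error(measured, estimated):
    """Compare drive estimates against reference transducer readings.
    Returns (root-mean-square error, mean relative error in percent)."""
    assert len(measured) == len(estimated) and measured
    n = len(measured)
    rmse = math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated)) / n)
    rel = 100.0 * sum(abs(m - e) / abs(m) for m, e in zip(measured, estimated)) / n
    return rmse, rel

# Hypothetical shaft torque samples in Nm: transducer vs. converter estimate.
measured = [10.0, 20.0, 40.0]
estimated = [10.5, 19.0, 41.0]
rmse, rel = estimate_error(measured, estimated)
print(round(rmse, 3), round(rel, 2))  # 0.866 4.17
```

The same function applies unchanged to rotational speed or shaft power series; only the input vectors differ.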
Abstract:
The main strengths of professional knowledge-intensive business services (P-KIBS) are knowledge and creativity, which need to be fostered, maintained and supported. Managing P-KIBS companies involves financial, operational and strategic risks, which is why it is reasonable to apply risk management techniques and frameworks in this context. A significant challenge lies in choosing reasonable ways of implementing risk management which will not limit the organization's creative ability and which will, furthermore, contribute to the process. This choice is related to a risk-intelligent approach, which becomes a justified way of finding the required balance. On a theoretical level, the field of managing both creativity and risk intelligence as a balanced process remains understudied, in particular within the KIBS industry. For instance, there is a wide range of separate models for innovation and risk management, but very little discussion of how to find the right balance between them. This study aims to shed light on the importance of a well-managed combination of these concepts. The research purpose of the present study is to find out how the balance between creativity and risk intelligence can be managed in P-KIBS. The methodological approach utilized in the study is strictly conceptual, without empirical aspects. The research purpose can be achieved by answering the following supporting research questions: 1. What are the characteristics and role of creativity as a component of the innovation process in a P-KIBS company? 2. What are the characteristics and role of risk intelligence as an approach to risk management process implementation in a P-KIBS company? 3. How can risk intelligence and creativity be balanced in P-KIBS? The main theoretical contribution of the study lies in the proposed creativity and risk intelligence stage process framework, designed as an algorithm that can be applied to an organizational canvas.
It consists of several distinct stages specified by the actors involved, their roles and their implications. An additional stage-wise description provides detailed tasks for each of the enterprise levels, while combining the strategies into one. The insights derived from the framework can be utilized by a wide range of specialists, from strategists to risk managers and from innovation managers to entrepreneurs. Any business that designs and delivers knowledge services can potentially gain valuable ideas and expand its conceptual understanding from the present report. Risk intelligence, in the current study, is a unique way of emphasizing the role of creativity in the professional knowledge-intensive industry and a worthy technique for making sound decisions about risk.
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of interface signs. In the light of the above, applying semiotic concepts (i.e., concepts from the study of signs) to web interface signs will help discover new and important perspectives on web user interface design and evaluation. The thesis focuses mainly on web interface signs and uses semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies were carried out with a total of 74 participants in Finland. The steps of a design science research process were followed while the studies were designed and conducted: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication.
The data were collected using observations in a usability testing lab, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics were used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know in order to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs, and a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation - SIDE) for interface sign design and evaluation, in order to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics for designing and evaluating interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.) and (b) the contributions of the SIDE framework from the evaluators' perspective.
Abstract:
As technology has developed, the amount of data produced and collected from the business environment has increased. Over 80% of that data includes some sort of reference to geographical location. Individuals have used such information through Google Maps or various GPS devices; in business, however, it has remained largely unexploited. This thesis studies the use and utilization of geographically referenced data in capital-intensive business. It first provides theoretical insight into how data and data-driven management enable and enhance business, and how geographically referenced data in particular adds value to a company. It then examines empirical case evidence of how geographical information can truly be exploited in capital-intensive business and what the value-adding elements of geographical information are. The study contains semi-structured interviews that are used to scan the attitudes and beliefs of an organization towards geographic information and to discover fields of application for a geographic information system within the case company. Additionally, geographical data is tested in order to illustrate how the data could be used in practice. Finally, the outcome of the thesis provides an understanding of the elements that constitute the added value of geographical information in business, and of how such data can be utilized in the case company and in capital-intensive business in general.
Abstract:
Feature extraction is the part of pattern recognition in which the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the actual image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated, by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented.
Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
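The abstract does not reproduce the operator itself, but the classic 8-neighbour, radius-1 LBP that such low-level features build on can be sketched in a few lines. This is a plain-Python illustration of the standard operator, not the thesis implementation:

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against
    the centre pixel (neighbour >= centre -> 1) and pack the bits into an
    8-bit code, walking the neighbours clockwise from the top-left."""
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for i, p in enumerate(neighbours):
        if p >= center:
            code |= 1 << i
    return code

# A sample 3x3 intensity patch; the code is invariant to any monotonic
# change of the intensities, which is the illumination robustness the
# abstract refers to.
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

In a full descriptor, this code is computed at every pixel and the image is summarized by a histogram of the resulting 256 pattern values.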
Abstract:
This thesis reports investigations on applying the Service Oriented Architecture (SOA) approach to the engineering of multi-platform and multi-device user interfaces. The study has three goals: (1) analyze the present frameworks for developing multi-platform and multi-device applications, (2) extend the principles of SOA to implement a multi-platform and multi-device architectural framework (SOA-MDUI), and (3) apply and validate the proposed framework in the context of a specific application. One of the problems addressed in this ongoing research is the large number of combinations of possible implementations of applications on different types of devices. Usually it is necessary to take into account the operating system (OS), the user interface (UI) including its appearance, the programming language (PL) and the architectural style (AS). Our proposed approach extended the principles of SOA using pattern-oriented design and model-driven engineering approaches. Synthesizing the present work in these domains, this research built and tested an engineering framework linking Model-Driven Architecture (MDA) and SOA approaches to the development of UIs. This study advances the general understanding of engineering, deploying and managing multi-platform and multi-device user interfaces as a service.
Abstract:
Retrograde autologous priming (RAP) has been routinely applied in pediatric cardiac cardiopulmonary bypass (CPB). However, this technique is generally performed in pediatric patients weighing more than 20 kg, and research on its application in patients weighing less than 20 kg is still scarce. This study explored the clinical application of RAP in CPB in pediatric patients undergoing cardiac surgery. Sixty pediatric patients scheduled for cardiac surgery were randomly divided into control and experimental groups. The experimental group was treated with CPB using RAP, while the control group was treated with conventional CPB (priming with suspended red blood cells, plasma and albumin). The hematocrit (Hct) and lactate (Lac) levels at different perioperative time-points, mechanical ventilation time, hospitalization duration, and intraoperative and postoperative blood usage were recorded. The results showed that Hct levels 15 min after the beginning of CPB (T2) and at the end of CPB (T3), as well as the number of intraoperative blood transfusions, were significantly lower in the experimental group (P<0.05). There were no significant differences in CPB time, aortic clamping time, T2-Lac or T3-Lac values between the two groups (P>0.05). Postoperatively, there were no significant differences in Hct (2 h after surgery), mechanical ventilation time, intensive care unit time, or postoperative blood transfusion between the two groups (P>0.05). RAP can effectively reduce hemodilution while using less or no banked blood, meet the intraoperative perfusion requirements, and decrease the perioperative blood transfusion volume in pediatric patients.
Abstract:
ICT contributes about 0.83 GtCO2 of emissions, of which 37% comes from telecom infrastructure. At the same time, the increasing cost of energy has been hindering the industry in providing more affordable services for users. One source of these problems is said to be the rigidity of current network infrastructures, which limits innovation in the network. SDN (Software Defined Networking) has emerged as one of the prominent solutions, with its idea of abstraction, visibility and programmability in the network. Nevertheless, significant efforts are still needed to actually utilize it to create a more energy- and environmentally friendly network. In this paper, we propose and develop a platform for developing ecology-related SDN applications. Our main approach to realizing this goal is to maximize the abstractions provided by OpenFlow and to expose RESTful interfaces to modules that enable energy saving in the network. While OpenFlow is intended to be the standard SDN protocol, some mechanisms are still not defined in its specification, such as settings related to Quality of Service (QoS). To solve this, we created REST interfaces for configuring QoS in the switches, which can maximize network utilization. We also created a module for minimizing the network resources required to deliver packets across the network. This is achieved by utilizing redundant links when they are needed, but disabling them when the load in the network decreases. The use of multiple paths in a network is also evaluated for its benefits in terms of transfer rate improvement and energy savings. We hope that the developed framework can be beneficial for developers creating applications to support environmentally friendly network infrastructures.
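The abstract does not spell out the link-disabling logic, so the following Python sketch only illustrates the general idea of the energy-saving module: keep just enough parallel links to carry the current demand and mark the rest for disabling. The function name, the headroom factor and the greedy keep-largest-links policy are all assumptions for illustration, not the paper's algorithm:

```python
def links_to_disable(links, demand_mbps, headroom=1.2):
    """Greedy energy-saving sketch: keep enough parallel links to carry
    the current demand (with some headroom for bursts) and return the
    ids of the redundant links that can be powered down.
    `links` maps link id -> capacity in Mbps."""
    required = demand_mbps * headroom
    keep, carried = set(), 0.0
    # Keep the largest links first, so the fewest links stay powered on.
    for lid, cap in sorted(links.items(), key=lambda kv: -kv[1]):
        if carried >= required:
            break
        keep.add(lid)
        carried += cap
    return [lid for lid in links if lid not in keep]

# Two 1 Gbps links plus a 100 Mbps link; at 400 Mbps of demand a single
# 1 Gbps link suffices, so the other two can be disabled.
links = {"l1": 1000, "l2": 1000, "l3": 100}
print(links_to_disable(links, demand_mbps=400))  # ['l2', 'l3']
```

In an SDN setting, the returned ids would be translated into controller actions (e.g. removing flow entries or shutting ports via the southbound protocol); re-enabling when load rises again is the symmetric case.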