914 results for Empirical Mode Decomposition, vibration-based analysis, damage detection, signal decomposition
                                
Resumo:
Tourette syndrome is a childhood-onset neuropsychiatric disorder with a high prevalence of attention deficit hyperactivity and obsessive-compulsive disorder co-morbidities. Structural changes have been found in frontal cortex and striatum in children and adolescents. A limited number of morphometric studies in Tourette syndrome persisting into adulthood suggest ongoing structural alterations affecting frontostriatal circuits. Using cortical thickness estimation and voxel-based analysis of T1- and diffusion-weighted structural magnetic resonance images, we examined 40 adults with Tourette syndrome in comparison with 40 age- and gender-matched healthy controls. Patients with Tourette syndrome showed relative grey matter volume reduction in orbitofrontal, anterior cingulate and ventrolateral prefrontal cortices bilaterally. Cortical thinning extended into the limbic mesial temporal lobe. The grey matter changes were modulated additionally by the presence of co-morbidities and symptom severity. Prefrontal cortical thickness reduction correlated negatively with tic severity, while volume increase in primary somatosensory cortex depended on the intensity of premonitory sensations. Orbitofrontal cortex volume changes were further associated with abnormal water diffusivity within grey matter. White matter analysis revealed changes in fibre coherence in patients with Tourette syndrome within anterior parts of the corpus callosum. The severity of motor tics and premonitory urges had an impact on the integrity of tracts corresponding to cortico-cortical and cortico-subcortical connections. Our results provide empirical support for a patho-aetiological model of Tourette syndrome based on developmental abnormalities, with perturbation of compensatory systems marking persistence of symptoms into adulthood. We interpret the symptom severity related grey matter volume increase in distinct functional brain areas as evidence of ongoing structural plasticity. The convergence of evidence from volume and water diffusivity imaging strengthens the validity of our findings and attests to the value of a novel multimodal combination of volume and cortical thickness estimations that provides unique and complementary information by exploiting their differential sensitivity to structural change.
                                
Resumo:
Schistosomiasis diagnosis is based on the detection of eggs in the faeces, which is laborious and lacks sensitivity, especially for patients with a low parasite burden. Immunological assays for specific antibody detection are available, but they usually demonstrate low sensitivity and/or specificity. In this study, two simple immunological assays were evaluated for the detection of soluble Schistosoma mansoni adult worm preparation (SWAP)- and egg-specific IgGs. These assays had not yet been evaluated in patients with low parasite burdens. Residents of an endemic area in Brazil donated sera and faecal samples for our study. The patients were initially diagnosed by a rigorous Kato-Katz analysis of 18 thick smears from four different stool samples. The ELISA-SWAP was successful for human diagnosis with 90% sensitivity and specificity, confirming the Kato-Katz diagnosis with nearly perfect agreement, as indicated by the Kappa index (0.85). Although the ELISA using soluble S. mansoni egg antigen was 85% sensitive, it exhibited lower specificity (80%; Kappa index: 0.75) and was more susceptible to cross-reactivity. We believe that immunological assays should be used in conjunction with Kato-Katz analysis as a supplementary tool for the diagnosis of schistosomiasis in patients with low infection burdens, which are usually hard to detect.
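The agreement statistics quoted above reduce to simple functions of a 2x2 table comparing each assay with the Kato-Katz reference. A minimal sketch in Python, using hypothetical counts rather than the study's data:

```python
# Illustrative sketch: sensitivity, specificity and Cohen's kappa for a
# diagnostic assay (e.g., ELISA-SWAP) against a reference test (Kato-Katz).
# The 2x2 counts below are hypothetical, not the study's data.

def diagnostic_agreement(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # true positives / all reference-positives
    specificity = tn / (tn + fp)          # true negatives / all reference-negatives
    observed = (tp + tn) / n              # observed agreement
    # chance agreement expected from the marginal totals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

sens, spec, kappa = diagnostic_agreement(tp=45, fp=5, fn=5, tn=45)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, kappa={kappa:.2f}")
```

With these illustrative counts the sketch returns 0.90 sensitivity, 0.90 specificity and a kappa of 0.80, the same order of agreement the study reports for ELISA-SWAP.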
                                
Resumo:
We want to present the work that our Faculty proposes to its students in the social skills course. Based on the detection and analysis of their social competences, a work plan is established so that they improve their social skills at both a personal and a professional level. We start from an active participation methodology in which the student is involved in the organization, development and assessment of the course. After collecting data on student satisfaction, we consider this a successful educational practice, and we therefore present the experience to facilitate its dissemination.
                                
Resumo:
RATIONALE AND OBJECTIVES: To systematically review and meta-analyze published data about the diagnostic accuracy of fluorine-18-fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET) and PET/computed tomography (CT) in the differential diagnosis between malignant and benign pleural lesions. METHODS AND MATERIALS: A comprehensive literature search of studies published through June 2013 regarding the diagnostic performance of (18)F-FDG-PET and PET/CT in the differential diagnosis of pleural lesions was carried out. All retrieved studies were reviewed and qualitatively analyzed. Pooled sensitivity, specificity, positive and negative likelihood ratio (LR+ and LR-) and diagnostic odds ratio (DOR) of (18)F-FDG-PET or PET/CT in the differential diagnosis of pleural lesions on a per-patient-based analysis were calculated. The area under the summary receiver operating characteristic curve (AUC) was calculated to measure the accuracy of these methods. Subanalyses considering the device used (PET or PET/CT) were performed. RESULTS: Sixteen studies comprising 745 patients were included in the systematic review. The meta-analysis of 11 selected studies provided the following results: sensitivity 95% (95% confidence interval [95%CI]: 92-97%), specificity 82% (95%CI: 76-88%), LR+ 5.3 (95%CI: 2.4-11.8), LR- 0.09 (95%CI: 0.05-0.14), DOR 74 (95%CI: 34-161). The AUC was 0.95. No significant improvement in diagnostic accuracy was found when considering PET/CT studies only. CONCLUSIONS: (18)F-FDG-PET and PET/CT were shown to be accurate diagnostic imaging methods in the differential diagnosis between malignant and benign pleural lesions; nevertheless, possible sources of false-negative and false-positive results should be kept in mind.
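For orientation, the pooled likelihood ratios and diagnostic odds ratio reported above are, to a first approximation, simple functions of the pooled sensitivity and specificity; exact meta-analytic pooling is done study by study, so the sketch below (with the pooled values as inputs) only roughly reproduces the published figures.

```python
# Relationship between the pooled indices reported above: likelihood ratios and
# the diagnostic odds ratio follow directly from sensitivity and specificity.
# Meta-analytic estimates are pooled per study and then combined, so these
# values only approximately match the reported LR+/LR-/DOR.

sensitivity = 0.95   # pooled sensitivity from the meta-analysis
specificity = 0.82   # pooled specificity from the meta-analysis

lr_pos = sensitivity / (1 - specificity)   # how much a positive scan raises the odds of malignancy
lr_neg = (1 - sensitivity) / specificity   # how much a negative scan lowers them
dor = lr_pos / lr_neg                      # diagnostic odds ratio

print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, DOR = {dor:.0f}")
```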
                                
Resumo:
Aim: We asked whether myocardial flow reserve (MFR) by Rb-82 cardiac PET improves the selection of patients eligible for invasive coronary angiography (ICA). Material and Methods: We enrolled 26 consecutive patients with suspected or known coronary artery disease who underwent dynamic Rb-82 PET/CT and ICA within 60 days; 4 patients who underwent revascularization or had any cardiovascular events between PET and ICA were excluded. Myocardial blood flow at rest (rMBF), at stress with adenosine (sMBF) and myocardial flow reserve (MFR = sMBF/rMBF) were estimated using the 1-compartment Lortie model (FlowQuant) for each coronary artery territory. Stenosis severity was assessed using computer-based automated edge detection (QCA). MFR was divided into 3 groups: G1: MFR < 1.5, G2: 1.5 ≤ MFR < 2 and G3: MFR ≥ 2. Stenosis severity was graded as non-significant (<50% or FFR ≥ 0.8), intermediate (50% ≤ stenosis < 70%) and severe (≥70%). The correlation between MFR and percentage of stenosis was assessed using a non-parametric Spearman test. Results: In G1 (44 vessels), 17 vessels (39%) had a severe stenosis, 11 (25%) an intermediate one, and 16 (36%) no significant stenosis. In G2 (13 vessels), 2 vessels (15%) presented a severe stenosis, 7 (54%) an intermediate one, and 4 (31%) no significant stenosis. In G3 (9 vessels), no vessel presented a severe stenosis, 1 (11%) an intermediate one, and 8 (89%) no significant stenosis. Of note, among 11 patients with 3-vessel low MFR < 1.5 (G1), 9/11 (82%) had at least one severe stenosis and 2/11 (18%) had at least one intermediate stenosis. There was a significant inverse correlation between stenosis severity and MFR among all 66 territories analyzed (rho = -0.38, p = 0.002). Conclusion: Patients with MFR > 2 could avoid ICA. Low MFR (G1, G2) on a vessel-based analysis seems to be a poor predictor of severe stenosis. Patients with 3-vessel low MFR would benefit from ICA, as they are likely to present a significant stenosis in at least one vessel.
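A minimal sketch of the per-territory computation described above: MFR as the stress-to-rest flow ratio, the same G1/G2/G3 thresholds, and a Spearman correlation against QCA stenosis. All numbers are hypothetical illustrations, not the study's data.

```python
# Per-territory MFR = sMBF/rMBF, grouping by the study's thresholds, and a
# Spearman correlation against stenosis severity. Toy values for illustration.
import numpy as np
from scipy.stats import spearmanr

rest_mbf   = np.array([0.8, 0.7, 1.0, 0.9])   # rMBF per coronary territory (ml/min/g)
stress_mbf = np.array([1.0, 1.3, 2.3, 1.5])   # sMBF with adenosine
stenosis   = np.array([80, 60, 20, 55])       # QCA stenosis (%)

mfr = stress_mbf / rest_mbf                   # myocardial flow reserve

def mfr_group(value):
    if value < 1.5:
        return "G1"
    elif value < 2.0:
        return "G2"
    return "G3"

groups = [mfr_group(v) for v in mfr]
rho, p = spearmanr(mfr, stenosis)             # inverse relation expected, as in the study
print(groups, f"rho={rho:.2f}")
```

With these toy values the four territories fall into G1, G2, G3 and G2, and the correlation comes out negative, mirroring the direction reported above.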
                                
Resumo:
One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly on theoretical and empirical research. Practical experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated. Theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation could be generated at large established firms and how it can have an impact on business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Usually large established firms cannot rely on informal ways of management, as these firms tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country.
I. Internal and External Determinants of Corporate Venture Capital Investment
The goal of this chapter is to focus on CVC as one of the mechanisms available for established firms to source new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their active CVC investment status? The study examines CVC investment activity and the importance of understanding the influential factors that make a firm decide to engage in CVC. The main question is: How do established firms' CVC programs adapt to changing internal conditions and external environments? Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability and environmental financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis of this study reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity has a pattern influenced by financial factors such as the level of R&D, free cash flow, lack of sales growth, and external conditions of the economy, with the NASDAQ price index as the most significant variable influencing CVC during this period.
II. Contribution of CVC and its Interaction with R&D to Value Creation
The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D. However, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings. Consequently, firms operating in such business sectors put a premium on finding new, sustainable and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the effect that the interaction of CVC and R&D has on value creation. This essay examines the impact of CVC and R&D on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors technologies rapidly become obsolete; consequently, firms operating in such business sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to impact value creation, firms operating in business sectors such as Engineering & Business Services, and Information Communication & Technology ought to consider CVC as a vital element of their innovation strategy. Moreover, regarding the CVC and R&D interaction effect, our findings suggest that R&D and CVC are complementary to value creation; hence, firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of generating value.
III. MCS and Organizational Structures for Radical Innovation
Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable, permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic margin rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the utilization of MCS along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006).
Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What are the performance measures companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? Moreover, in recent years, research on innovation management has been conducted mainly at either the firm level (Birkinshaw, Hamel, & Mol, 2008a) or at the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000). Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS and organizational structures, which ought to be activated and adapted contingent on the type of innovation that is being pursued (i.e., incremental or radical innovation). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006, 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. They rather argue that radical innovation requires a clear organizational structure and formal MCS.
                                
Resumo:
This study reports on the analysis of annual reports from 14 listed companies in Spain over a five-year period, from 1998 to 2002. Companies in the sample are selected on the basis of their knowledge-based assets and incentives to report on Intellectual Capital. The empirical analysis is twofold: 1) Firstly, we analyse the value of intellectual capital using a value-based approach, through the difference between market and book value over the period considered. Results show that there is a general decrease in the 'hidden value' of these companies, probably due to the general trend in stock markets. 2) Secondly, we carry out a content-based analysis of the complete annual reports of the companies over the five-year period. Preliminary findings seem to suggest that although the level of disclosure has increased over time, this is mainly in the form of narrative. Overall, the level of disclosure of intellectual capital remains low.
                                
Resumo:
Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from 'as if' linear models. This paper illuminates the distinctions in these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge, and thus maps, of when and which heuristic to employ.
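A toy simulation in the spirit of (though not identical to) the paper's analysis: in a simple two-cue environment, an 'as if' linear rule that weights both cues is compared with a single-cue heuristic that relies only on the more valid cue. The environment parameters (cue weights, noise level) are assumptions chosen for illustration.

```python
# Compare a weighted-additive linear rule with a single-cue heuristic on
# paired comparisons in a synthetic two-cue environment.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
cues = rng.normal(size=(n, 2))                    # two cues per alternative
weights = np.array([0.7, 0.3])                    # assumed ecological cue weights
criterion = cues @ weights + rng.normal(scale=0.5, size=n)   # noisy criterion

# paired comparisons: which of two alternatives has the higher criterion value?
a, b = cues[: n // 2], cues[n // 2 :]
truth = criterion[: n // 2] > criterion[n // 2 :]

linear_choice = (a @ weights) > (b @ weights)     # "as if" linear rule
heuristic_choice = a[:, 0] > b[:, 0]              # single-cue rule: most valid cue only

print("linear accuracy:   ", np.mean(linear_choice == truth))
print("heuristic accuracy:", np.mean(heuristic_choice == truth))
```

Varying the weights and noise shows the trade-off the abstract describes: the heuristic is far simpler but only competitive when the chosen cue carries most of the valid information.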
                                
Resumo:
This work proposes an original contribution to the understanding of fishermen's spatial behavior, based on the behavioral ecology and movement ecology paradigms. Through the analysis of Vessel Monitoring System (VMS) data, we characterized the spatial behavior of Peruvian anchovy fishermen at different scales: (1) the behavioral modes within fishing trips (i.e., searching, fishing and cruising); (2) the behavioral patterns among fishing trips; (3) the behavioral patterns by fishing season conditioned by ecosystem scenarios; and (4) the computation of maps of an anchovy presence proxy from the spatial patterns of behavioral mode positions. At the first scale considered, we compared several Markovian (hidden Markov and semi-Markov models) and discriminative models (random forests, support vector machines and artificial neural networks) for inferring the behavioral modes associated with VMS tracks. The models were trained under a supervised setting and validated using tracks for which behavioral modes were known (from on-board observers' records). Hidden semi-Markov models performed better, and were retained for inferring the behavioral modes on the entire VMS dataset. At the second scale considered, each fishing trip was characterized by several features, including the time spent within each behavioral mode. Using a clustering analysis, fishing trip patterns were classified into groups associated with management zones, fleet segments and skippers' personalities. At the third scale considered, we analyzed how ecological conditions shaped fishermen's behavior. By means of co-inertia analyses, we found significant associations between fishermen, anchovy and environmental spatial dynamics, and fishermen's behavioral responses were characterized according to contrasted environmental scenarios. At the fourth scale considered, we investigated whether the spatial behavior of fishermen reflected to some extent the spatial distribution of anchovy. Finally, this work provides a wider view of fishermen's behavior: fishermen are not only economic agents, but they are also foragers, constrained by ecosystem variability. To conclude, we discuss how these findings may be of importance for fisheries management, collective behavior analyses and end-to-end models.
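As a concrete illustration of the first scale (behavioral mode inference), the sketch below trains one of the discriminative models mentioned above, a random forest, on simple VMS-derived features such as speed and turning angle. The study itself retained hidden semi-Markov models, which this sketch does not implement; the feature set and toy data are assumptions.

```python
# Supervised behavioral-mode classification from VMS-like track features.
# Toy data only: each mode is simulated with its own speed/turning-angle profile.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
speed = np.concatenate([rng.normal(9, 1, n),      # cruising: fast, straight
                        rng.normal(3, 1, n),      # searching: slower, winding
                        rng.normal(1, 0.5, n)])   # fishing: nearly stationary
turn = np.concatenate([rng.normal(0, 5, n),
                       rng.normal(0, 40, n),
                       rng.normal(0, 90, n)])
X = np.column_stack([speed, np.abs(turn)])
y = np.repeat(["cruising", "searching", "fishing"], n)   # observer-style labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```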
                                
Resumo:
The study of the thermal behavior of complex packages such as multichip modules (MCMs) is usually carried out by measuring the so-called thermal impedance response, that is, the transient temperature after a power step. From the analysis of this signal, the thermal frequency response can be estimated and, consequently, compact thermal models may be extracted. We present a method to obtain an estimate of the time constant distribution underlying the observed transient. The method is based on an iterative deconvolution that produces an approximation to the time constant spectrum while preserving a convenient convolution form. This method is applied to the thermal response of a microstructure obtained by finite element analysis, as well as to the measured thermal response of a transistor array integrated circuit (IC) in an SMD package.
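A minimal numpy sketch of the general idea, not necessarily the authors' exact algorithm: in the standard logarithmic-time formulation with z = ln(t), the derivative of the heating curve is the convolution of the time-constant spectrum R(z) with the fixed kernel w(z) = exp(z - exp(z)), and a Van Cittert-style iteration approximately removes that convolution while keeping the convolution form at every step. The synthetic two-peak spectrum is assumed data.

```python
import numpy as np

z = np.linspace(-6.0, 6.0, 600)                 # logarithmic time axis, z = ln(t)
dz = z[1] - z[0]
# synthetic "true" spectrum with two time-constant clusters (assumed data)
true_spectrum = np.exp(-(z + 1.0) ** 2 / 0.1) + 0.5 * np.exp(-(z - 2.0) ** 2 / 0.1)
kernel = np.exp(z - np.exp(z))                  # fixed convolution kernel w(z)

def convolve(spectrum):
    # discrete approximation of the continuous convolution R(z) * w(z)
    return np.convolve(spectrum, kernel, mode="same") * dz

derivative = convolve(true_spectrum)            # plays the role of the measured da(z)/dz

estimate = derivative.copy()                    # initial guess
for _ in range(100):                            # Van Cittert-style iterations
    estimate = estimate + (derivative - convolve(estimate))
    estimate = np.clip(estimate, 0.0, None)     # a time-constant spectrum is non-negative

print("max residual:", np.max(np.abs(derivative - convolve(estimate))))
```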
                                
Resumo:
OBJECTIVE: To systematically review and meta-analyze published data about the diagnostic performance of Fluorine-18-Fluorodeoxyglucose ((18)F-FDG) positron emission tomography (PET) and PET/computed tomography (PET/CT) in the assessment of pleural abnormalities in cancer patients. METHODS: A comprehensive literature search of studies published through June 2013 regarding the role of (18)F-FDG-PET and PET/CT in evaluating pleural abnormalities in cancer patients was performed. All retrieved studies were reviewed and qualitatively analyzed. Pooled sensitivity, specificity, positive and negative likelihood ratio (LR+ and LR-) and diagnostic odds ratio (DOR) of (18)F-FDG-PET or PET/CT on a per-patient-based analysis were calculated. The area under the summary ROC curve (AUC) was calculated to measure the accuracy of these methods in the assessment of pleural abnormalities. Sub-analyses considering (18)F-FDG-PET/CT and patients with lung cancer only were carried out. RESULTS: Eight studies comprising 360 cancer patients (323 with lung cancer) were included. The meta-analysis of these selected studies provided the following results: sensitivity 86% [95% confidence interval (95%CI): 80-91%], specificity 80% [95%CI: 73-85%], LR+ 3.7 [95%CI: 2.8-4.9], LR- 0.18 [95%CI: 0.09-0.34], DOR 27 [95%CI: 13-56]. The AUC was 0.907. No significant improvement was found when considering PET/CT studies only or patients with lung cancer only. CONCLUSIONS: (18)F-FDG-PET and PET/CT were shown to be useful diagnostic imaging methods in the assessment of pleural abnormalities in cancer patients; nevertheless, possible sources of false-negative and false-positive results should be kept in mind. The literature focusing on the use of (18)F-FDG-PET and PET/CT in this setting remains limited, and prospective studies are needed.
                                
Resumo:
Background: Several patterns of grey and white matter changes have been separately described in young adults with first-episode psychosis. Concomitant investigation of grey and white matter densities in patients with first-episode psychosis without other psychiatric comorbidities that includes all relevant imaging markers could provide clues to the neurodevelopmental hypothesis in schizophrenia. Methods: We recruited patients with first-episode psychosis diagnosed according to the DSM-IV-TR and matched controls. All participants underwent magnetic resonance imaging (MRI). Voxel-based morphometry (VBM) analysis and mean diffusivity voxel-based analysis (VBA) were used for grey matter data. Fractional anisotropy and axial, radial and mean diffusivity were analyzed using tract-based spatial statistics (TBSS) for white matter data. Results: We included 15 patients and 16 controls. The mean diffusivity VBA showed significantly greater mean diffusivity in the first-episode psychosis group than in the control group in the lingual gyrus bilaterally, the occipital fusiform gyrus bilaterally, the right lateral occipital gyrus and the right inferior temporal gyrus. Moreover, the TBSS analysis revealed lower fractional anisotropy in the first-episode psychosis group than in the control group in the genu of the corpus callosum, minor forceps, corticospinal tract, right superior longitudinal fasciculus, left middle cerebellar peduncle, left inferior longitudinal fasciculus and the posterior part of the fronto-occipital fasciculus. This analysis also revealed greater radial diffusivity in the first-episode psychosis group than in the control group in the right corticospinal tract, right superior longitudinal fasciculus and left middle cerebellar peduncle. Limitations: The modest sample size and the absence of women in our series could limit the impact of our results. Conclusion: Our results highlight the structural vulnerability of grey matter in posterior areas of the brain among young adult male patients with first-episode psychosis. Moreover, the concomitant greater radial diffusivity within several regions already revealed by the fractional anisotropy analysis supports the idea of late myelination in patients with first-episode psychosis.
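The diffusion metrics compared above (fractional anisotropy and mean, axial and radial diffusivity) are all simple functions of the three diffusion tensor eigenvalues. A minimal sketch with an illustrative white-matter-like set of eigenvalues (in units of 10^-3 mm^2/s), not study data:

```python
# Standard DTI scalar maps from the tensor eigenvalues (lambda1 >= lambda2 >= lambda3).
import numpy as np

def dti_scalars(l1, l2, l3):
    md = (l1 + l2 + l3) / 3.0                     # mean diffusivity
    ad = l1                                       # axial diffusivity
    rd = (l2 + l3) / 2.0                          # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2)) # fractional anisotropy
    return md, ad, rd, fa

md, ad, rd, fa = dti_scalars(1.7, 0.3, 0.3)       # illustrative coherent-tract tensor
print(f"MD={md:.2f}, AD={ad:.2f}, RD={rd:.2f}, FA={fa:.2f}")
```

The formulas make the abstract's pattern explicit: if radial diffusivity rises while axial diffusivity stays put, fractional anisotropy falls and mean diffusivity increases.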
                                
Resumo:
Introduction: The field of connectomic research is growing rapidly, resulting from methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008), yielding so-called Connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrain functional imaging studies and are relevant in their interpretation. The need has grown for a special-purpose software tool that supports both clinical researchers and neuroscientists in investigating such connectome data. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and to have access to a multitude of scientific libraries. Results: Using a flexible plugin architecture, it is possible to easily enhance functionality for specific purposes. The following features are already implemented:
* Ready usage of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes, select edges, retrieve more node information (ConnectomeWiki) and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Arbitrary metadata can be stored for networks, thereby allowing e.g. group-based analysis or meta-analysis.
* Python shell for scripting. Application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008).
* Interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) was used to process 20 healthy subjects into an average Connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates relevant datatypes and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
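As an illustration of the kind of scripted analysis the Python shell and NetworkX integration enable, the sketch below builds a small weighted graph (standing in for the GraphML network stored in a Connectome File Format archive, which could be loaded with nx.read_graphml) and applies a weight threshold as the interactive filters do; node names and weights are made up for the example.

```python
import networkx as nx

# Toy stand-in for a connectome network; a real one would come from GraphML,
# e.g. G = nx.read_graphml("network.graphml") after unpacking the archive.
G = nx.Graph()
G.add_weighted_edges_from([
    ("lh.precuneus", "lh.posteriorcingulate", 42.0),
    ("lh.precuneus", "rh.precuneus", 17.5),
    ("rh.precuneus", "rh.posteriorcingulate", 39.0),
    ("lh.posteriorcingulate", "rh.posteriorcingulate", 12.0),
])

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("density:", nx.density(G))
print("average clustering:", nx.average_clustering(G))

# thresholding on an edge property, analogous to the GUI's interactive filters
strong = [(u, v) for u, v, d in G.edges(data=True) if d["weight"] > 15]
print("edges above threshold:", strong)
```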
                                
Resumo:
This project examines similarities and differences between the automated condition data collected on and off county paved roads and the manual condition data collected by Iowa Department of Transportation (DOT) staff in 2000 and 2001. The researchers also provided staff support to the advisory committee in exploring other options to the highway needs process. The results show that the automated condition data can be used in a converted highway needs process with no major differences between the two methods. Even though the foundation rating difference was significant, the foundation rating weighting factor in HWYNEEDS is minimal and should not have a major impact. In terms of RUTF formula-based distribution, the results clearly show the superiority of the condition-based analysis compared to the non-condition-based one. That correlation can be further enhanced by adding more distress variables to the analysis.
                                
Resumo:
This Master's thesis investigates the implementation of real-time activity-based costing in the information system of a Finnish SME that manufactures laser chips. In addition, the effects of activity-based costing on operational activities and on activity-based management are examined. The literature part of the thesis reviews, on the basis of published sources, activity-based costing theories, costing methods and the technologies used in the technical implementation. In the implementation part, a web-based activity-based costing system was designed and built to support the case company's cost accounting and financial administration. The tool was integrated into the company's enterprise resource planning and manufacturing execution systems. Compared with traditional data collection for activity-based costing models, the inputs to the costing system in the case company arrive in real time as part of a larger information system integration. The thesis aims to establish the relationship between the requirements of activity-based costing and database systems. The company can use the activity-based costing system, for example, for product pricing and cost accounting by viewing product-related costs from different perspectives. Conclusions can be drawn from accurate cost information, and the data produced by the system can be used to determine whether developing a particular project, customer relationship or product is economically viable.
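A minimal sketch of the activity-based costing calculation that such a real-time system automates: activity cost pools divided by cost-driver volumes give driver rates, and a product batch is charged for the driver units it consumes. Activity names, pools and volumes are illustrative assumptions, not the case company's figures.

```python
# Activity-based costing in miniature: driver rates and batch cost assignment.
cost_pools = {"wafer handling": 40_000.0, "laser testing": 90_000.0, "packaging": 20_000.0}
driver_volumes = {"wafer handling": 800, "laser testing": 1_500, "packaging": 2_000}  # driver units per period

rates = {a: cost_pools[a] / driver_volumes[a] for a in cost_pools}   # cost per driver unit

# driver units consumed by one hypothetical product batch
consumption = {"wafer handling": 12, "laser testing": 30, "packaging": 25}

batch_cost = sum(rates[a] * consumption[a] for a in consumption)
print({a: round(rates[a], 2) for a in rates}, "batch cost:", round(batch_cost, 2))
```

In the real-time setting described above, the consumption figures would be fed continuously from the ERP and manufacturing execution systems rather than entered by hand.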
 
                    