808 results for: Fingerprints, Bayesian decision theory, Value of information, Influence diagram
Abstract:
The main objective of this study is to assess the potential of the information technology industry in the Saint Petersburg area to become one of the new key industries in the Russian economy. To achieve this objective, the study analyzes in particular the international competitiveness of the industry and the conditions for clustering. Russia is currently heavily dependent on its natural resources, which are the main source of its recent economic growth. To achieve good long-term economic performance, Russia needs well-performing industries beyond those operating in the field of natural resources. The Russian government has acknowledged this and started special initiatives to promote other industries such as information technology and nanotechnology. Information technology is an interesting case: the industry is less than 20 years old and growing fast in Russia. Information technology activities and markets are concentrated mainly in Russia's two biggest cities, Moscow and Saint Petersburg, and the areas around them. The information technology industry in the Saint Petersburg area, although smaller than that of Moscow, is especially dynamic and is attracting a growing foreign company presence. However, the industry is not yet internationally competitive, as it lacks substantial and sustainable competitive advantages. It is also only a potential global information technology cluster, because it lacks a competitive edge, a wide supplier and manufacturing base, and other related parts of the information technology value system. On its own, the industry will not become a key industry in Russia, but it will play an important supporting role in the development of other industries. The information technology market in the Saint Petersburg area is already large, and if integrated more tightly with Moscow, the two will together form a huge and still growing market, sufficient for most companies operating in Russia now and in the future. The potential of information technology inside Russia is therefore immense.
Abstract:
Recent theory predicts that harsh and stochastic conditions generally promote the evolution of cooperation. Here, we test experimentally whether stochasticity in economic losses also affects the value of reputation in indirect reciprocity, a type of cooperation that is very typical of humans. We used a repeated helping game with observers. One subject (the "Unlucky") lost some money; another (the "Passer-by") could reduce this loss by accepting a cost to herself, thereby building up a reputation that could be used by others in later interactions. The losses were either stable or stochastic, but the average loss over time and the average efficiency gains of helping were kept constant in both treatments. We found that players with a reputation for being generous were generally more likely to receive help from others, such that investing in a good reputation generated long-term benefits that compensated for the immediate costs of helping. Helping frequencies were similar in both treatments, but players with a reputation for being selfish lost more resources under stochastic conditions. Hence, returns on investment were steeper when losses varied than when they did not. We conclude that this type of stochasticity increases the value of reputation in indirect reciprocity.
Abstract:
Drying is a major step in the manufacturing process in the pharmaceutical industry, and the selection of the dryer and its operating conditions is sometimes a bottleneck. In spite of the difficulties, these bottlenecks are handled with the utmost care because of good manufacturing practices (GMP) and the industry's image in the global market. The purpose of this work is to investigate the use of existing knowledge for the selection of a dryer and its operating conditions for drying pharmaceutical materials, with the help of methods such as case-based reasoning and decision trees, in order to reduce the time and expenditure needed for research. The work consisted of two major parts: a literature survey on the theories of spray drying, case-based reasoning and decision trees; and a practical part covering data acquisition and testing of the models on existing and upgraded data. Testing resulted in a combination of the two models, case-based reasoning and decision trees, which gave more specific results than conventional methods.
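The abstract above does not give implementation details; as a rough illustration only (not the thesis code), the following Python sketch shows how case-based reasoning, here reduced to nearest-neighbour retrieval of a similar past drying case, can be combined with a decision tree trained on the same case base. The feature names, case base and dryer labels are invented assumptions.

```python
# Hypothetical sketch: case-based reasoning (nearest-neighbour retrieval)
# combined with a decision tree for dryer selection.
# Features, cases and labels are illustrative, not from the thesis.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Each case: [moisture content %, heat sensitivity (0-1), batch size kg, target particle size um]
case_features = np.array([
    [60.0, 0.8, 100.0, 50.0],
    [30.0, 0.2, 500.0, 200.0],
    [70.0, 0.9, 50.0, 20.0],
    [20.0, 0.1, 1000.0, 500.0],
])
case_dryers = np.array(["spray", "fluid_bed", "spray", "tray"])

# CBR step: retrieve the most similar historical case.
cbr = NearestNeighbors(n_neighbors=1).fit(case_features)

# Decision-tree step: generalise over the same case base.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(case_features, case_dryers)

new_case = np.array([[65.0, 0.85, 80.0, 40.0]])
_, idx = cbr.kneighbors(new_case)
print("CBR suggestion:", case_dryers[idx[0][0]])
print("Decision-tree suggestion:", tree.predict(new_case)[0])
```

In such a setup the retrieved case supplies the operating conditions of a concrete precedent, while the tree gives a rule-based recommendation; agreement between the two narrows the selection.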
Abstract:
PURPOSE: Pretreatment measures of the systemic inflammatory response, including the Glasgow prognostic score (GPS), the neutrophil-to-lymphocyte ratio (NLR), the monocyte-to-lymphocyte ratio (MLR), the platelet-to-lymphocyte ratio (PLR) and the prognostic nutritional index (PNI), have been recognized as prognostic factors in clear cell renal cell carcinoma (CCRCC), but to date no study has compared these markers. METHODS: We evaluated the pretreatment GPS, NLR, MLR, PLR and PNI in 430 patients who underwent surgery for clinically localized CCRCC (pT1-3N0M0). Associations with disease-free survival were assessed with Cox models. Discrimination was measured with the C-index, and a decision curve analysis was used to evaluate the clinical net benefit. RESULTS: On multivariable analyses, all measures of systemic inflammatory response were significant prognostic factors. The increase in discrimination compared with the stage, size, grade and necrosis (SSIGN) score alone was 5.8 % for the GPS, 1.1-1.4 % for the NLR, 2.9-3.4 % for the MLR, 2.0-3.3 % for the PLR and 1.4-3.0 % for the PNI. In the simultaneous multivariable analysis of all candidate measures, the final multivariable model contained the SSIGN score (HR 1.40, P < 0.001), the GPS (HR 2.32, P < 0.001) and the MLR (HR 5.78, P = 0.003) as significant variables. Adding both the GPS and the MLR increased the discrimination of the SSIGN score by 6.2 % and improved the clinical net benefit. CONCLUSIONS: In patients with clinically localized CCRCC, the GPS and the MLR appear to be the most relevant prognostic measures of the systemic inflammatory response. They may be used as an adjunct for patient counseling, tailoring management and clinical trial design.
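As a hedged illustration of the kind of analysis reported above (this is not the authors' code, and the data, column names and marker values below are invented), a Cox model for disease-free survival with an inflammatory marker added to a baseline score, evaluated by the concordance index, might be sketched in Python with the lifelines package as follows:

```python
# Hypothetical sketch: Cox proportional hazards model for disease-free survival
# with a baseline prognostic score and an added inflammatory marker,
# compared by concordance index. All data are invented for illustration.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "dfs_months": [12, 60, 35, 8, 90, 45, 24, 72],   # follow-up time
    "recurrence": [1, 0, 1, 1, 0, 0, 1, 0],          # event indicator
    "ssign_score": [4, 1, 3, 6, 0, 2, 3, 1],         # baseline prognostic score
    "mlr": [0.45, 0.20, 0.38, 0.60, 0.15, 0.25, 0.30, 0.22],  # monocyte-to-lymphocyte ratio
})

# Baseline model: prognostic score only.
base = CoxPHFitter().fit(df[["dfs_months", "recurrence", "ssign_score"]],
                         duration_col="dfs_months", event_col="recurrence")

# Extended model: prognostic score plus the inflammatory marker.
full = CoxPHFitter().fit(df, duration_col="dfs_months", event_col="recurrence")

print("C-index, score only:", round(base.concordance_index_, 3))
print("C-index, score + marker:", round(full.concordance_index_, 3))
```

The difference between the two concordance indices corresponds to the "increase in discrimination" figures quoted in the abstract; the decision curve analysis mentioned there would be an additional step not shown here.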
Abstract:
Nurses' acceptance and use of information technology in psychiatric hospitals. The use of information technology (IT) has not played a very significant role in psychiatric nursing, even though IT applications have been found to have radically changed healthcare services and the work processes of nursing staff in recent years. The aim of this study is to describe the acceptance and use of information technology among nursing staff working in psychiatric care, and to create a recommendation that can be used to support these in psychiatric hospitals. The study consists of five sub-studies that employed both statistical and qualitative research methods. The research data were collected among the nursing staff of nine acute psychiatric wards during 2003-2006. The Technology Acceptance Model (TAM) was used to structure the research process and to deepen the understanding of the results obtained. The study identified eight key factors that may support the acceptance and use of IT applications by nurses working in psychiatric hospitals when these factors are taken into account in the introduction of new applications. The factors fell into two groups: external factors (allocation of resources, collaboration, computer skills, IT education, training in the use of the application, the patient-nurse relationship), and ease of use and usability of the application (guidance for use, ensuring usability). The TAM theory proved useful in interpreting the results. The developed recommendation includes the measures that can be used to support the commitment of both organizational management and nursing staff and thereby ensure the acceptance and use of a new application in nursing. The recommendation can be applied in practice when new information systems are implemented in psychiatric hospitals.
Abstract:
Tissue-based biomarkers are studied to obtain information about pathologic processes and cancer outcome, and to enable the development of patient-tailored treatments. The aim of this study was to investigate the potential prognostic and/or predictive value of selected biomarkers in colorectal cancer (CRC). Group IIA secretory phospholipase A2 (IIA PLA2) expression was assessed in 114 samples representing different phases of human colorectal carcinogenesis. Securin, Ki-67, CD44 variant 6 (CD44v6), aldehyde dehydrogenase 1 (ALDH1) and β-catenin were studied in material from 227 rectal carcinoma patients treated with short-course preoperative radiotherapy (RT), long-course preoperative (chemo)RT (CRT) or surgery only. Epidermal growth factor receptor (EGFR) gene copy number (GCN), its heterogeneity in CRC tissue, and its association with response to the EGFR-targeted antibodies cetuximab and panitumumab were analyzed in a cohort of 76 metastatic CRC patients. IIA PLA2 expression was decreased in invasive carcinomas compared to adenomas, but was not related to patient survival. High securin expression after long-course (C)RT and high ALDH1 expression in node-negative rectal cancer were independent adverse prognostic factors, ALDH1 specifically in patients treated with adjuvant chemotherapy. The lack of membranous CD44v6 in the rectal cancer invasive front was associated with an infiltrative growth pattern and the risk of disease recurrence. A heterogeneous EGFR GCN increase predicted benefit from EGFR-targeted antibodies, also in the chemorefractory patient population. In summary, high securin and ALDH1 protein expression independently relate to poor outcome in subgroups of rectal cancer patients, potentially because of resistance to conventional chemotherapeutics. A heterogeneous increase in EGFR GCN was validated as a promising predictive factor in the treatment of metastatic CRC.
Abstract:
Industrial maintenance can be executed internally, acquired from the original equipment manufacturer or outsourced to a service provider, and this results in many different kinds of business relationships. To maximize the total value in a maintenance business relationship, it is important to know what the partner values. The value of maintenance services can be considered to consist of value elements, and the perceived total value for the customer and the service provider is the sum of these value elements. The specific objectives of this thesis are to identify the most important value elements for the maintenance service customer and provider, and to recognize where the value elements differ. The study was executed as a statistical analysis using the survey method. The data were collected by an online survey sent to 345 maintenance service professionals in Finland. In the survey, four different types of value elements were considered: the customer's high-critical and low-critical items and the service provider's core and support services. The elements most valued by the respondents were reliability, safety at work, environmental safety, and operator knowledge. The least valued elements were asset management factors and access to markets. Statistically significant differences in value elements between service types were also found. As a managerial implication, a value gap profile is presented. This Master's Thesis is part of the MaiSeMa (Industrial Maintenance Services in a Renewing Business Network: Identify, Model and Manage Value) research project, in which network decision models are created to identify, model and manage the value of maintenance services.
Abstract:
The aim of this study was to investigate the diagnosis delay of breast cancer and its impact on the stage of disease. The study also evaluated nuclear DNA content, the immunohistochemical expression of Ki-67 and bcl-2, and the correlation of these biological features with clinicopathological features and patient outcome. 200 Libyan women diagnosed during 2008-2009 were interviewed about the period from the first symptoms to the final histological diagnosis of breast cancer. Retrospective preclinical and clinical data were also collected from medical records on a form (questionnaire) in association with the interview. Tumor material from the patients was collected and nuclear DNA content was analysed using DNA image cytometry. The expression of Ki-67 and bcl-2 was assessed using immunohistochemistry (IHC). The studies described in this thesis show that the median diagnosis time for women with breast cancer was 7.5 months and that 56% of patients were diagnosed more than 6 months after the first symptoms. Inappropriate reassurance that the lump was benign was an important reason for prolongation of the diagnosis time. Diagnosis delay was also associated with initial breast symptom(s) that did not include a lump, old age, illiteracy, and a history of benign fibrocystic disease. The patients with diagnosis delay had larger tumor size (p<0.0001), positive lymph nodes (p<0.0001), and a high incidence of late clinical stages (p<0.0001). Biologically, 82.7% of tumors were aneuploid and 17.3% were diploid. The median S-phase fraction (SPF) of the tumors was 11%, while the median Ki-67 positivity was 27.5%. High Ki-67 expression was found in 76% of patients, and high SPF values in 56% of patients. Positive bcl-2 expression was found in 62.4% of tumors, and 72.2% of the bcl-2-positive samples were ER-positive. Tumors with DNA aneuploidy, high proliferative activity and negative bcl-2 expression were associated with a high grade of malignancy and short survival. The SPF value is a useful cell proliferation marker in assessing prognosis, and the decision cut point of 11% for SPF in the Libyan material was clearly significant (p<0.0001). Bcl-2 is a powerful prognosticator and an independent predictor of breast cancer outcome in the Libyan material (p<0.0001). Libyan breast cancer was investigated in these studies from two different aspects: health services and biology. The results show that diagnosis delay is a very serious problem in Libya and is associated with complex interactions between many factors, leading to advanced stages and potentially to high mortality. Cytometric DNA variables, proliferative markers (Ki-67 and SPF), and bcl-2 negativity reflect the aggressive behavior of Libyan breast cancer and could be used together with traditional factors to predict the outcome of individual patients and to select appropriate therapy.
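To illustrate what testing a proliferation-marker cut point such as the 11% SPF threshold mentioned above typically involves (the data, variable names and values below are invented, and this is not the thesis code), a minimal Python sketch dichotomising SPF and comparing survival with a log-rank test could look like this:

```python
# Hypothetical sketch: dichotomise S-phase fraction (SPF) at 11% and compare
# survival between the two groups with a log-rank test.
# All data are invented for illustration only.
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "spf": [5, 8, 12, 20, 9, 15, 30, 7, 25, 10],
    "survival_months": [120, 96, 40, 22, 110, 35, 18, 100, 28, 90],
    "died": [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
})

low = df[df["spf"] <= 11]
high = df[df["spf"] > 11]

result = logrank_test(low["survival_months"], high["survival_months"],
                      event_observed_A=low["died"], event_observed_B=high["died"])
print("Log-rank p-value:", result.p_value)
```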
Abstract:
The objective of this research is to investigate brand value generation. The study is conducted in the context of high-technology companies. The research aims at finding the impact of long-term brand development strategies, including advertising investments, R&D investments, R&D intensity, new products developed, and design. The empirical part of the study involved the collection of primary and secondary data on 36 companies operating in the high-technology sector and rated by the Interbrand consultancy as having the most valuable brands. The data contained information for six consecutive years, from 2008 to 2013. The data were analyzed using fixed effect and random effect models (panel data analysis). The analysis showed a positive long-run effect of advertising and R&D investments on the brand value of high-technology companies. The impact of the remaining three strategies was not confirmed, and further investigation is required.
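As a hedged sketch of the panel-data approach described above (not the thesis code; it assumes the linearmodels package, and the firm data are simulated placeholders), fixed-effects and random-effects regressions of brand value on advertising and R&D spending might look like this in Python:

```python
# Hypothetical sketch: fixed-effects and random-effects panel regressions of
# brand value on advertising and R&D spending. Data and names are placeholders.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(0)
firms, years = 36, list(range(2008, 2014))
idx = pd.MultiIndex.from_product([range(firms), years], names=["firm", "year"])
df = pd.DataFrame({
    "adv_spend": rng.normal(100, 20, len(idx)),
    "rd_spend": rng.normal(80, 15, len(idx)),
}, index=idx)
df["brand_value"] = 500 + 2.0 * df["adv_spend"] + 1.5 * df["rd_spend"] + rng.normal(0, 30, len(idx))

# Fixed effects: firm-specific intercepts absorb time-invariant brand differences.
fe = PanelOLS.from_formula("brand_value ~ adv_spend + rd_spend + EntityEffects", data=df).fit()
# Random effects: firm intercepts treated as draws from a common distribution.
re = RandomEffects.from_formula("brand_value ~ 1 + adv_spend + rd_spend", data=df).fit()

print(fe.params)
print(re.params)
```

In practice the choice between the two specifications is usually guided by a Hausman-type comparison of the estimated coefficients.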
Abstract:
This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role of protector. It investigates concepts related to the 'ontology of becoming', and examines what it is that information securing produces. The research is theory driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari's concepts of 'machine', 'territory' and 'becoming', and Michel Serres's concept of the 'parasite'), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories – the insides and outsides of information systems – are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users in terms of a system. The difference is based on the order of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study, a 'parasite') to the information system that it protects. Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of the software itself, the implementation of a copy protection system is an entirely external addition. In fact, this parasitic addition makes the software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers, and information security policies entice social engineers to use and exploit information security techniques in novel ways.
Abstract:
An investor can either conduct independent analysis or rely on the analyses of others. Stock analysts provide markets with expectations regarding particular securities. However, analysts have different capabilities and resources, of which investors are seldom cognizant. The local advantage refers to the advantage stemming from cultural or geographical proximity to the securities analyzed. Prior research has confirmed that local agents are generally more accurate or produce excess returns. This thesis tests the investment value of the local advantage for Finnish stocks using target price data. The empirical section investigates the local advantage from several aspects. It is found that local analysts were more focused on certain sectors, generally those located close to consumer markets. Market reactions to target price revisions were generally insignificant, with the exception of local positive target prices. Both local and foreign target prices were overly optimistic and exhibited signs of herding. Neither group could be identified as a leader or follower of new information. Additionally, foreign price change expectations were more in line with quantitative models and ideas such as beta or return mean reversion. The locals were more accurate than foreign analysts in 5 out of 9 sectors, and vice versa in one. These sectors were somewhat in line with coverage decisions and supported the idea of a local advantage stemming from proximity to markets, not to headquarters. The accuracy advantage depended on the sample years and on the measure used. Local analysts ranked the magnitudes of price changes more accurately for optimistic target prices, and foreign analysts for pessimistic ones. The directional accuracy of both groups was under 50%, and target prices held no linear predictive power. The investment value of target prices was tested by forming mean-variance efficient portfolios. In parallel with the differing accuracies in the levels of expectations, the foreign portfolio performed better when short sales were allowed and the local portfolio better when they were disallowed. Both local and non-local portfolios performed worse than a passive index fund, albeit not statistically significantly. This was in line with the previously reported low overall accuracy and the different accuracy profiles. Refraining from estimating individual stock returns altogether produced statistically significantly higher Sharpe ratios than the local or foreign portfolios. The proposed method of testing the investment value of target prices of different groups suffered from some inconsistencies. Nevertheless, these results are of interest to investors seeking the advice of security analysts.
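As a rough illustration of the portfolio test described above (this is not the thesis code; the expected returns, covariance matrix and risk-free rate below are invented), a long-only mean-variance optimisation that maximises the Sharpe ratio could be sketched in Python as follows:

```python
# Hypothetical sketch: build a long-only mean-variance efficient portfolio from
# analyst-implied expected returns and report its Sharpe ratio.
# All numerical inputs are invented for illustration.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.05, 0.12, 0.07])   # expected returns implied by target prices
cov = np.array([[0.04, 0.01, 0.02, 0.01],
                [0.01, 0.03, 0.01, 0.00],
                [0.02, 0.01, 0.06, 0.02],
                [0.01, 0.00, 0.02, 0.05]])
rf = 0.02                                  # risk-free rate

def neg_sharpe(w):
    ret = w @ mu
    vol = np.sqrt(w @ cov @ w)
    return -(ret - rf) / vol

n = len(mu)
res = minimize(neg_sharpe, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,                                  # short sales disallowed
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("Weights:", np.round(res.x, 3))
print("Sharpe ratio:", round(-res.fun, 3))
```

Allowing short sales corresponds to relaxing the lower bound on the weights; running the same optimisation on local versus foreign expectation inputs is the kind of comparison the thesis describes.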
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in recent years, but mainly from a technical stance, and there is a void in business-related cases. This thesis fills the gap in the research by addressing big data challenges and failure cases. The Technology-Organization-Environment framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information system failures. A review of the extant literature was carried out using a collection of leading information systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both failure and success perspectives in order to build a model for big data failure. I then delineate the contribution of the information systems failure literature, as it provides the principal dynamics behind the Technology-Organization-Environment framework. The gathered literature was categorised and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table are designed to act as a comprehensive starting point and as general guidance for academics, CIOs and other system stakeholders, to facilitate decision-making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction and discontinued use.
Abstract:
This thesis examines the quality of credit ratings issued by the three major credit rating agencies: Moody's, Standard & Poor's and Fitch. If credit ratings are informative, then the prices of underlying credit instruments such as fixed-income securities and credit default insurance should change to reflect the new credit risk information. Using data on 246 major fixed-income securities issuers spanning January 2000 to December 2011, we find that credit default swap (CDS) spreads do not react to changes in credit ratings. Hence, credit ratings from all three agencies are not price informative. CDS prices are mostly determined by historical CDS prices, while ratings are mostly determined by historical ratings. We find that credit ratings are marginally more sensitive to CDS than CDS are to ratings.
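As a minimal sketch of the kind of lagged regression behind the finding that CDS spreads respond to their own history rather than to rating changes (the data below are simulated, and this is not the thesis dataset or code), one could regress spread changes on their lag and on contemporaneous rating changes in Python:

```python
# Hypothetical sketch: regress monthly CDS spread changes on lagged spread changes
# and contemporaneous rating changes. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 144                                      # Jan 2000 - Dec 2011, monthly
d_cds = rng.normal(0, 5, T)                  # CDS spread changes (bps)
d_rating = rng.choice([-1, 0, 0, 0, 1], T)   # rating changes (notches)

y = d_cds[1:]
X = sm.add_constant(np.column_stack([d_cds[:-1], d_rating[1:]]))
model = sm.OLS(y, X).fit()
print(model.params)    # [const, lagged CDS change, rating change]
print(model.pvalues)
```

An insignificant coefficient on the rating-change regressor alongside a significant lagged-spread coefficient would correspond to the pattern summarised in the abstract; the mirror-image regression with ratings as the dependent variable tests sensitivity in the other direction.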