841 results for Systematic analysis
Abstract:
Internationalization and the subsequent rapid growth have created the need to consolidate the IT systems of many small-to-medium-sized production companies. Enterprise Resource Planning (ERP) systems are a common solution for such companies. Deployment of these ERP systems consists of many steps, one of which is the implementation of the same shared system at all international subsidiaries. From the IT point of view, this is also one of the most important steps in the company's internationalization strategy. The mechanical process of creating the required connections for the off-shore sites is the easiest and most well-documented step along the way, but the actual value of the system, once operational, is perceived in its operational reliability. The operational reliability of an ERP system is a combination of many factors, ranging from hardware- and connectivity-related issues to administrative tasks and communication between decentralized administrative units and sites. To accurately analyze the operational reliability of such a system, one must take into consideration the full functionality of the system, including not only the mechanical and systematic processes but also the users and their administration. Operational reliability in an international environment relies heavily on hardware and telecommunication adequacy, so it is imperative to dimension resources with regard to planned usage. Still, with poorly maintained communication and administration schemes, no amount of bandwidth or memory will be enough to maintain a productive level of reliability. This thesis analyzes the implementation of a shared ERP system at an international subsidiary of a Finnish production company. The system is Microsoft Dynamics Ax, currently being introduced to a Slovakian facility, a subsidiary of Peikko Finland Oy. The primary task is to create a feasible basis of analysis against which the operational reliability of the system can be evaluated precisely. Building on that analysis, the aim is to give recommendations on how future implementations should be managed.
Abstract:
BACKGROUND: One of the most frequent complications of pancreaticoduodenectomy (PD) is delayed gastric emptying (DGE). The aim of this study was to evaluate the impact of the type of gastro/duodenojejunal reconstruction (antecolic vs. retrocolic) after PD on the incidence of DGE. METHODS: A systematic review was performed according to the PRISMA guidelines. Randomized controlled trials (RCTs) comparing antecolic vs. retrocolic reconstruction were included irrespective of the PD technique. A meta-analysis was then performed. RESULTS: Six RCTs were included, for a total of 588 patients. The overall quality was good and the general risk of bias was low. DGE did not differ significantly between the antecolic and retrocolic groups (OR 0.6, 95% CI 0.31-1.16, p = 0.13). The other main surgery-related complications (pancreatic fistula, hemorrhage, intra-abdominal abscess, bile leak and wound infection) did not depend on the reconstruction route (OR 0.84, 95% CI 0.41-1.70, p = 0.63). No statistically significant difference in length of hospital stay was found between the two groups. There was also no difference in DGE incidence when only pylorus-preserving PD was considered, or between DGE grades A, B and C. CONCLUSION: This meta-analysis shows that antecolic reconstruction after PD is not superior to retrocolic reconstruction in terms of DGE.
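As a quick check on how the quoted effect estimate, confidence interval and p-value fit together, the sketch below (not taken from the study itself) back-calculates the z statistic and two-sided p-value from the reported pooled odds ratio for DGE.

```python
# Back-calculation of the test statistic from a reported odds ratio and its
# 95% CI (figures quoted in the abstract: OR 0.6, 95% CI 0.31-1.16).
import math

or_point, ci_low, ci_high = 0.60, 0.31, 1.16

log_or = math.log(or_point)
se_log_or = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)   # CI width on the log scale
z = log_or / se_log_or
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))          # two-sided normal p-value

print(f"z = {z:.2f}, p = {p:.2f}")   # roughly z = -1.52, p = 0.13, matching the reported p-value
```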
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from the images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, some of which were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a first, fast triage step that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
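To illustrate the profile-comparison step named above, here is a minimal sketch of a Canberra distance computation between two image-derived profiles; the profile values and their length are hypothetical, not data from the study.

```python
# Canberra distance between two profiles (e.g., Hue or Edge profiles extracted
# from regions of interest of scanned documents). Values below are hypothetical.
def canberra_distance(profile_a, profile_b):
    """Sum of |a - b| / (|a| + |b|) over bins, skipping bins that are zero in both."""
    total = 0.0
    for a, b in zip(profile_a, profile_b):
        denom = abs(a) + abs(b)
        if denom > 0:
            total += abs(a - b) / denom
    return total

doc_1 = [0.12, 0.30, 0.25, 0.18, 0.15]   # hypothetical normalised profile, document 1
doc_2 = [0.10, 0.32, 0.27, 0.16, 0.15]   # hypothetical normalised profile, document 2
print(canberra_distance(doc_1, doc_2))    # smaller distance suggests a possible common source
```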
Abstract:
The objective of this study was to evaluate the methodological characteristics of cost-effectiveness evaluations carried out in Spain since 1990 that include life-years gained (LYG) as the outcome used to measure the incremental cost-effectiveness ratio. METHODS: A systematic review of published studies was conducted, describing their characteristics and methodological quality. We analysed the cost per LYG results in relation to a commonly accepted Spanish cost-effectiveness threshold and their possible relation to the cost per quality-adjusted life year (QALY) gained when both were calculated for the same economic evaluation. RESULTS: A total of 62 economic evaluations fulfilled the selection criteria, 24 of them also including a cost per QALY gained result. The methodological quality of the studies was good (55%) or very good (26%). A total of 124 cost per LYG results were obtained, with a mean ratio of 49,529
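For readers unfamiliar with the cost per LYG metric discussed above, the short sketch below shows the incremental cost-effectiveness calculation with entirely hypothetical costs, life-years and threshold; it is not data from any of the reviewed evaluations.

```python
# Incremental cost-effectiveness ratio expressed as cost per life-year gained (LYG).
# All figures are hypothetical and only illustrate the calculation.
def cost_per_lyg(cost_new, cost_comparator, ly_new, ly_comparator):
    return (cost_new - cost_comparator) / (ly_new - ly_comparator)

icer = cost_per_lyg(cost_new=45_000, cost_comparator=30_000, ly_new=6.2, ly_comparator=5.7)
threshold = 30_000   # an often-quoted Spanish willingness-to-pay figure, assumed here for illustration
print(f"{icer:,.0f} EUR per LYG; within threshold: {icer <= threshold}")
```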
Abstract:
Reversed phase liquid chromatography (RPLC) coupled to mass spectrometry (MS) is the gold standard technique in bioanalysis. However, hydrophilic interaction chromatography (HILIC) can represent a viable alternative to RPLC for the analysis of polar and/or ionizable compounds, as it often provides higher MS sensitivity and alternative selectivity. Nevertheless, this technique can also be prone to matrix effects (ME), which are one of the major issues in quantitative LC-MS bioanalysis. To ensure acceptable method performance (i.e., trueness and precision), a careful evaluation and minimization of ME is required. In the present study, the incidence of ME in HILIC-MS/MS and RPLC-MS/MS was compared for plasma and urine samples using two representative sets of 38 pharmaceutical compounds and 40 doping agents, respectively. The optimal generic chromatographic conditions, in terms of selectivity with respect to interfering compounds, were established in both chromatographic modes by testing three different stationary phases in each mode with different mobile phase pH values. A second step involved the assessment of ME in RPLC and HILIC under the best generic conditions, using the post-extraction addition method. Biological samples were prepared using two different sample pre-treatments: a non-selective sample clean-up procedure (protein precipitation and simple dilution for plasma and urine samples, respectively) and a selective sample preparation, i.e., solid-phase extraction for both matrices. The non-selective pre-treatments led to significantly less ME in RPLC than in HILIC, regardless of the matrix. On the contrary, HILIC appeared to be a valuable alternative to RPLC for plasma and urine samples treated with a selective sample preparation. Indeed, with selective sample preparation, the compounds affected by ME were different in HILIC and RPLC, and ME occurrence in RPLC was generally lower for urine samples and similar for plasma samples compared with HILIC. The complementarity of the two chromatographic modes was also demonstrated, as ME was only rarely observed for urine and plasma samples when the most appropriate chromatographic mode was selected.
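As a rough sketch of the post-extraction addition calculation mentioned above (peak areas are hypothetical, and conventions for reporting ME vary between laboratories), the matrix effect can be expressed as the relative signal deviation of a post-extraction spike against a neat standard:

```python
# Matrix effect from the post-extraction addition experiment: compare the peak
# area of an analyte spiked into a blank matrix extract after extraction with
# the peak area of the same concentration in neat solvent. Values hypothetical.
def matrix_effect_percent(area_post_extraction_spike, area_neat_standard):
    return (area_post_extraction_spike / area_neat_standard - 1.0) * 100.0

area_neat = 1.00e6      # peak area in neat solvent (hypothetical)
area_matrix = 0.78e6    # peak area spiked post-extraction into blank plasma (hypothetical)
print(f"ME = {matrix_effect_percent(area_matrix, area_neat):.0f}%")   # negative value = ion suppression
```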
Abstract:
BACKGROUND: Many publications report the prevalence of chronic kidney disease (CKD) in the general population. Comparisons across studies are hampered because CKD prevalence estimates are influenced by study population characteristics and laboratory methods. METHODS: For this systematic review, two researchers independently searched PubMed, MEDLINE and EMBASE to identify all original research articles published between 1 January 2003 and 1 November 2014 reporting the prevalence of CKD in the European adult general population. Data on study methodology and reporting of CKD prevalence results were independently extracted by two researchers. RESULTS: We identified 82 eligible publications and included 48 publications of individual studies for the data extraction. There was considerable variation in population sample selection. The majority of studies did not report the sampling frame used, and the response rate ranged from 10 to 87%. With regard to the assessment of kidney function, 67% used a Jaffe assay, whereas 13% used the enzymatic assay for creatinine determination. Isotope dilution mass spectrometry calibration was used in 29%. The CKD-EPI (52%) and MDRD (75%) equations were most often used to estimate glomerular filtration rate (GFR). CKD was defined as estimated GFR (eGFR) <60 mL/min/1.73 m(2) in 92% of studies. Urinary markers of CKD were assessed in 60% of the studies. CKD prevalence was reported by sex and age strata in 54 and 50% of the studies, respectively. In publications with a primary objective of reporting CKD prevalence, 39% reported a 95% confidence interval. CONCLUSIONS: The findings from this systematic review show considerable variation in the methods used for sampling the general population and assessing kidney function across studies reporting CKD prevalence. These results are used to provide recommendations to help optimize both the design and the reporting of future CKD prevalence studies, which will enhance the comparability of study results.
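For context on the GFR-estimating equations cited above, the sketch below implements the widely published 2009 CKD-EPI creatinine equation (serum creatinine in mg/dL) and flags CKD at eGFR < 60 mL/min/1.73 m2; the subject values are hypothetical and the race coefficient is omitted for brevity.

```python
# 2009 CKD-EPI creatinine equation (as commonly published); inputs are hypothetical.
def ckd_epi_egfr(scr_mg_dl, age_years, female):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

egfr = ckd_epi_egfr(scr_mg_dl=1.4, age_years=67, female=False)   # hypothetical subject
print(f"eGFR = {egfr:.0f} mL/min/1.73 m^2; CKD by the eGFR < 60 definition: {egfr < 60}")
```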
Abstract:
Contemporary public administrations have become increasingly complex, having to coordinate actions with emerging actors in the public and private spheres. In this scenario, modern ICTs have begun to be seen as an ideal vehicle for resolving some of the problems of public administration. We argue that there is a clear need to explore the extent to which public administrations are undergoing a process of transformation towards a network government linked to the systematic incorporation of ICTs into their basic activities. Through a critical analysis of a selection of e-government evaluation reports, we conclude that further research is needed if we are to build a solid government assessment framework based on network-like organisational characteristics.
Abstract:
This thesis concentrates on developing a practical local-approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence, material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
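For orientation, the sketch below evaluates the Gurson-Tvergaard yield function that the integration algorithms discussed above operate on; the q1, q2, q3 values are typical literature choices, not parameters taken from this thesis, and the stress state is hypothetical.

```python
# Gurson-Tvergaard yield function:
#   Phi = (q/sigma_y)^2 + 2*q1*f*cosh(3*q2*sigma_m / (2*sigma_y)) - (1 + q3*f^2),
# where q is the von Mises equivalent stress, sigma_m the hydrostatic stress,
# sigma_y the matrix flow stress and f the void volume fraction.
import math

def gurson_tvergaard_phi(q_eq, sigma_m, sigma_y, f, q1=1.5, q2=1.0, q3=2.25):
    return ((q_eq / sigma_y) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma_y)
            - (1.0 + q3 * f ** 2))

# Phi < 0 means the (hypothetical) stress state lies inside the yield surface.
print(gurson_tvergaard_phi(q_eq=200.0, sigma_m=150.0, sigma_y=300.0, f=0.01))
```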
Abstract:
Raw measurement data do not always immediately convey useful information, but applying mathematical and statistical analysis tools to the measurement data can improve the situation. Data analysis offers benefits such as acquiring meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, mathematical tools such as Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB) are used. Linear regression is then used to build a model based on a subset of variables that carry the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model can fit the whole available dataset well. It is therefore proposed as future work to build piecewise non-linear regression models if the same dataset is used, or for the plant to provide another dataset collected in a more systematic fashion than the present data.
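A minimal sketch of the modelling step described above is given below, using synthetic data: the process variables are averaged over a 30-day window and an ordinary least-squares model is fitted to predict the quality variable. Variable names, dimensions and data are illustrative only and do not come from the plant dataset.

```python
# Synthetic illustration: 1-month averaging followed by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_vars = 360, 5
X_daily = rng.normal(size=(n_days, n_vars))                        # daily process variables (synthetic)
y_daily = X_daily @ np.array([0.5, -0.2, 0.0, 0.1, 0.3]) + rng.normal(scale=0.5, size=n_days)

# Average over 30-day blocks, as in the more reliable 1-month models reported above.
X_monthly = X_daily.reshape(12, 30, n_vars).mean(axis=1)
y_monthly = y_daily.reshape(12, 30).mean(axis=1)

design = np.column_stack([np.ones(len(X_monthly)), X_monthly])     # intercept + averaged variables
coef, *_ = np.linalg.lstsq(design, y_monthly, rcond=None)
print("intercept and coefficients:", np.round(coef, 2))
```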
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure throughout a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA sits somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of making comparisons between observations concerning different musical parameters and of combining it with statistical and perhaps other music analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic similarity and set-class similarity are proposed. The most advanced results were attained by employing automated function generation – comparable with so-called genetic programming – to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly, independent of the type of similarity function employed in the analysis.
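As a small illustration of the kind of set-class data that such comparisons operate on, the sketch below builds interval-class vectors for two pitch-class sets and compares them with a simple city-block distance; the similarity measures actually proposed in the study are more elaborate than this.

```python
# Interval-class vectors and a naive distance between them (illustrative only).
from itertools import combinations

def interval_class_vector(pc_set):
    icv = [0] * 6
    for a, b in combinations(sorted(set(pc_set)), 2):
        ic = min((b - a) % 12, (a - b) % 12)    # interval class 1..6
        icv[ic - 1] += 1
    return icv

def icv_distance(set_a, set_b):
    return sum(abs(x - y) for x, y in zip(interval_class_vector(set_a),
                                          interval_class_vector(set_b)))

print(interval_class_vector([0, 4, 7]))     # major triad -> [0, 0, 1, 1, 1, 0]
print(icv_distance([0, 4, 7], [0, 3, 7]))   # 0: major and minor triads share the same interval content
```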
Abstract:
Fetuses of mothers with gestational diabetes mellitus are at increased risk of developing perinatal complications, mainly due to macrosomia. However, in view of the marked heterogeneity of this disease, it is difficult to set guidelines for diagnosis and treatment. This complicates the choice of assigning patients either to diet or to insulin therapy. Also of concern is how much benefit can be expected from insulin therapy in preventing fetal complications in these patients. In a systematic review of the literature assessing the efficacy of insulin in preventing macrosomia in fetuses of mothers with gestational diabetes, we found six randomized controlled trials comparing diet alone to diet plus insulin. The studies included a total of 1281 patients (644 in the diet plus insulin group and 637 in the diet group), with marked differences among trials concerning diagnostic criteria, randomization process and treatment goals. Meta-analysis of the data resulted in a risk difference of -0.098 (95%CI: -0.168 to -0.028) and a number-needed-to-treat of 11 (95%CI: 6 to 36), which means that 11 patients need to be treated with insulin to prevent one case of macrosomia. This indicates a potential benefit of insulin, but the evidence is not strong enough to establish treatment guidelines. Because of the heterogeneous evidence available in the literature on this matter, we conclude that larger trials addressing the efficacy of these two therapeutic modalities in preventing macrosomia are warranted.
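The number-needed-to-treat quoted above follows directly from the pooled risk difference, as the short calculation below shows.

```python
# NNT = 1 / |risk difference|, rounded up to a whole patient.
import math

risk_difference = -0.098                  # pooled risk difference for macrosomia (from the abstract)
nnt = math.ceil(1 / abs(risk_difference))
print(nnt)                                # 11 women treated with insulin to prevent one case of macrosomia
```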
Abstract:
The aim of the present study was to compare the efficacy of chemotherapy and support treatment in patients with advanced non-resectable gastric cancer in a systematic review and meta-analysis of randomized clinical trials that included a comparison of chemotherapy and support care treatment in patients diagnosed with gastric adenocarcinoma, regardless of their age, gender or place of treatment. The search strategy was based on the criteria of the Cochrane Base, using the following key words: 1) randomized clinical trials and antineoplastic combined therapy or gastrointestinal neoplasm, 2) stomach neoplasm and drug therapy, 3) clinical trial and multi-modality therapy, 4) stomach neoplasm and drug therapy or quality of life, 5) double-blind method or clinical trial. The search was carried out using the Cochrane, Medline and Lilacs databases. Five studies fulfilled the inclusion criteria, for a total of 390 participants, 208 (53%) receiving chemotherapy, 182 (47%) receiving support care treatment and 6 losses (1.6%). The 1-year survival rate was 8% for support care and 20% for chemotherapy (RR = 2.14, 95% CI = 1.00-4.57, P = 0.05); 30% of the patients in the chemotherapy group and 12% in the support care group attained a 6-month symptom-free period (RR = 2.33, 95% CI = 1.41-3.87, P < 0.01). Quality of life evaluated after 4 months was significantly better for the chemotherapy patients (34%; RR = 2.07, 95% CI = 1.31-3.28, P < 0.01) with tumor mass reduction (RR = 3.32, 95% CI = 0.77-14.24, P = 0.1). Chemotherapy increased the 1-year survival rate of the patients and provided a longer symptom-free period of 6 months and an improvement in quality of life.
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a prevention tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches currently used to assess patents arguably fall into four categories, all based on conventional discounted cash-flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and the incapability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these barriers by rationalizing the use of two techniques, namely fuzzy set theory (addressing the first three issues) and real option analysis (addressing the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash-flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness in that uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, this dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
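As a rough numerical sketch of the fuzzy pay-off idea mentioned above, the code below treats the project NPV as a triangular fuzzy number and weights the mean of the positive outcomes by the share of membership area lying above zero. The figures are hypothetical, and the mean of the positive side is approximated by a membership-weighted average rather than the method's exact possibilistic mean, so this is only an illustration of the concept.

```python
# Numerical approximation of a fuzzy pay-off style real option value for a
# triangular fuzzy NPV (a = pessimistic, b = best guess, c = optimistic).
import numpy as np

def triangular_membership(x, a, b, c):
    return np.where(x < b, (x - a) / (b - a), (c - x) / (c - b)).clip(0.0)

def fuzzy_payoff_value(a, b, c, n=100_000):
    x = np.linspace(a, c, n)
    dx = x[1] - x[0]
    mu = triangular_membership(x, a, b, c)
    total_area = mu.sum() * dx
    positive = x > 0
    positive_area = mu[positive].sum() * dx
    if positive_area == 0.0:
        return 0.0
    mean_positive = (mu[positive] * x[positive]).sum() / mu[positive].sum()
    return (positive_area / total_area) * mean_positive

# Hypothetical fuzzy NPV of a patent project, in kEUR: worst -200, best guess 150, best 800.
print(round(fuzzy_payoff_value(a=-200.0, b=150.0, c=800.0), 1))
```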
Abstract:
The radial approach is widely used in the treatment of patients with coronary artery disease. We conducted a meta-analysis of published results on the efficacy and safety of the left and right radial approaches in patients undergoing percutaneous coronary procedures. A systematic search of reference databases was conducted, and data from 14 randomized controlled trials involving 6870 participants were analyzed. The left radial approach was associated with significant reductions in fluoroscopy time [standardized mean difference (SMD)=-0.14, 95% confidence interval (CI)=-0.19 to -0.09; P<0.00001] and contrast volume (SMD=-0.07, 95%CI=-0.12 to -0.02; P=0.009). There were no significant differences between the left and the right radial approaches in the rate of procedural failure [risk ratio (RR)=0.98; 95%CI=0.77-1.25; P=0.88] or in procedural time (SMD=-0.05, 95%CI=-0.17 to 0.06; P=0.38). Tortuosity of the subclavian artery (RR=0.27, 95%CI=0.14-0.50; P<0.0001) was reported more frequently with the right radial approach. A greater number of catheters were used with the left than with the right radial approach (SMD=0.25, 95%CI=0.04-0.46; P=0.02). We conclude that the left radial approach is as safe as the right radial approach, and that the left radial approach should be recommended for use in percutaneous coronary procedures, especially in percutaneous coronary angiography.
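For readers unfamiliar with how per-trial standardized mean differences are combined into pooled figures like those quoted above, here is a generic inverse-variance (fixed-effect) pooling sketch; the per-trial SMDs and standard errors are hypothetical, not the data behind the reported results.

```python
# Fixed-effect inverse-variance pooling of standardized mean differences (SMDs).
import math

trials = [(-0.12, 0.06), (-0.18, 0.09), (-0.10, 0.05)]   # (SMD, SE) per trial; hypothetical values

weights = [1.0 / se ** 2 for _, se in trials]
pooled = sum(w * smd for (smd, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled SMD = {pooled:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```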
Abstract:
This thesis presents an overview of the Open Data research area and the quantity of available evidence, and establishes the research evidence base through a Systematic Mapping Study (SMS). A total of 621 publications published between 2005 and 2014 were identified, of which 243 were selected in the review process. The thesis highlights the implications of the proliferation of Open Data principles in the emerging era of accessibility, reusability and sustainability of data transparency. The findings of the mapping study are described with quantitative and qualitative measures based on organization affiliation, country, year of publication, research method, star rating and the units of analysis identified. Furthermore, the units of analysis were categorized into development lifecycle, linked open data, type of data, technical platforms, organizations, ontology and semantics, adoption and awareness, intermediaries, security and privacy, and supply of data, which are important components for providing quality open data applications and services. The results of the mapping study help organizations (such as academia, government and industry), researchers and software developers to understand the existing trends in open data, the latest research developments and the demand for future research. In addition, the proposed conceptual framework of Open Data research can be adopted and expanded to strengthen and improve current open data applications.