940 results for Multi-cycle, Expectation, and Conditional Estimation Method
Abstract:
Background: In longitudinal studies where subjects experience recurrent events over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For recurrent event data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study of bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW at low correlation levels, but the situation reverses at high correlations. Conclusions: All methods are robust to censoring, worsen with increasing recurrence levels and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, even though they are well developed theoretically.
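The three model structures above differ mainly in how risk sets and time scales are laid out. As an illustration, here is a minimal sketch of the marginal (WLW) layout and fit using the Python lifelines package; the data, column names and covariate are invented, and the AG and PWP variants would instead use counting-process (start-stop) intervals and event-number strata on gap times, respectively.

```python
# Sketch: marginal (WLW) model for recurrent events with the lifelines
# package; dataset and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

# WLW layout: each subject contributes one row per possible event number k,
# with time measured from study entry to the k-th event or censoring.
df = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "event_num": [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3],
    "time":      [5, 12, 20, 8, 15, 15, 4, 9, 14, 10, 10, 10],
    "event":     [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0],  # 1 = event, 0 = censored
    "treat":     [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1],
})

cph = CoxPHFitter()
# Stratify by event number (separate baseline hazards per event rank) and
# cluster on subject to obtain robust sandwich standard errors.
cph.fit(df, duration_col="time", event_col="event",
        strata=["event_num"], cluster_col="subject")
cph.print_summary()
```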
Abstract:
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population receptive field (pRF) model developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory-responsive regions (non-PAC).
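To make the model concrete, below is a minimal synthetic sketch of a 1D pRF fit in log-frequency space with numpy/scipy; the Gaussian tuning profile, the omission of HRF convolution, and all data are simplifying assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a 1D population receptive field (pRF) fit for tonotopy;
# all data here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Stimulus: which log2-frequency was presented at each TR, in arbitrary
# order (the design need not be a frequency progression).
log_freqs = np.linspace(np.log2(200), np.log2(8000), 40)
stim_seq = rng.choice(log_freqs, size=120)

def prf_response(stim, center, sigma, beta):
    """Predicted response: Gaussian tuning in log-frequency, scaled by beta.
    (A full model would also convolve with an HRF.)"""
    return beta * np.exp(-0.5 * ((stim - center) / sigma) ** 2)

# Synthetic voxel tuned to ~1 kHz with moderate bandwidth, plus noise.
y = (prf_response(stim_seq, np.log2(1000), 0.8, 1.0)
     + 0.1 * rng.standard_normal(120))

# Fit center (preferred frequency) and sigma (bandwidth) for the voxel.
(center, sigma, beta), _ = curve_fit(prf_response, stim_seq, y,
                                     p0=[np.log2(1000), 1.0, 1.0])
print(f"preferred frequency ~{2**center:.0f} Hz, "
      f"bandwidth sigma = {sigma:.2f} octaves")
```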
Abstract:
Aims: The psychometric properties of the EORTC QLQ-BN20, a brain cancer-specific HRQOL questionnaire, have previously been determined in an English-speaking sample of patients. This study examined the validity and reliability of the questionnaire in a multi-national, multi-lingual study. Methods: QLQ-BN20 data were selected from two completed phase III EORTC/NCIC clinical trials in brain cancer (N = 891), covering 12 languages. Experimental treatments were surgery followed by radiotherapy (RT) and adjuvant PCV chemotherapy, or surgery followed by concomitant RT plus temozolomide (TMZ) chemotherapy and adjuvant TMZ chemotherapy. Standard treatment consisted of surgery and postoperative RT alone. The psychometrics of the QLQ-BN20 were examined by means of multi-trait scaling analyses, reliability estimation, known-groups validity testing, and responsiveness analysis. Results: All QLQ-BN20 items correlated more strongly with their own scale (r > 0.70) than with other QLQ-BN20 scales. Internal consistency reliability coefficients were high (all alpha ≥ 0.70). Known-groups comparisons yielded positive results, with the QLQ-BN20 distinguishing between patients with differing levels of performance status and mental functioning. Responsiveness of the questionnaire to changes over time was acceptable. Conclusion: The QLQ-BN20 demonstrates adequate psychometric properties and can be recommended for use in conjunction with the QLQ-C30 in assessing the HRQOL of brain cancer patients in international studies.
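For readers unfamiliar with the psychometric checks mentioned, the following sketch computes Cronbach's alpha and corrected item-scale correlations on synthetic data; it illustrates the generic statistics, not the EORTC analysis itself.

```python
# Sketch of two standard psychometric checks on hypothetical item data:
# Cronbach's alpha (internal consistency) and corrected item-scale
# correlation (each item vs the sum of the remaining items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                 # shared trait
scale = latent + 0.5 * rng.normal(size=(200, 4))   # 4 items on one scale

print(f"alpha = {cronbach_alpha(scale):.2f}")
for j in range(scale.shape[1]):
    rest = np.delete(scale, j, axis=1).sum(axis=1)
    print(f"item {j}: corrected item-scale r = "
          f"{np.corrcoef(scale[:, j], rest)[0, 1]:.2f}")
```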
Abstract:
The changing business environment demands that chemical industrial processes be designed to meet multi-objective requirements and to enhance innovative design activities. The requirements and key issues of conceptual process synthesis have changed and are no longer those of conventional process design; there is an increased emphasis on innovative research to develop new concepts, novel techniques and processes. A central issue, how to enhance the creativity of the design process, requires further research into methodologies. The thesis presents a conflict-based methodology for conceptual process synthesis. The motivation of the work is to support decision-making in design and synthesis and to enhance the creativity of design activities. It deals with the multi-objective requirements and combinatorially complex nature of process synthesis. The work is carried out based on a new concept and design paradigm adapted from the Theory of Inventive Problem Solving (TRIZ). TRIZ is claimed to be a 'systematic creativity' framework thanks to its knowledge-based and evolutionary-directed nature. The conflict concept, when applied to process synthesis, sheds new light on design problems and activities. The conflict model is proposed as a way of describing design problems and handling design information. Design tasks are represented as groups of conflicts, and a conflict table is built as the design tool. The general design paradigm is formulated to handle conflicts in both the early and detailed design stages. The methodology developed reflects the conflict-laden nature of process design and synthesis. The method is implemented and verified through case studies of distillation system design, reactor/separator network design and waste minimization. Handling the various levels of conflicts evolves possible design alternatives through a systematic procedure that establishes an efficient and compact solution space for the detailed design stage. The approach also provides the information needed to bridge the gap between the application of qualitative knowledge in the early stage and quantitative techniques in the detailed design stage. Creativity is enhanced through the better understanding of design problems gained from the conflict concept and through the improvement in engineering design practice brought by the systematic nature of the approach.
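As an illustration of the kind of design tool described, here is a hypothetical sketch of a conflict table as a lookup structure; the objectives and resolution principles shown are invented examples in the spirit of the TRIZ contradiction matrix, not taken from the thesis.

```python
# Hypothetical sketch of a "conflict table" as a design data structure:
# each entry pairs two competing design objectives with candidate
# resolution principles.
conflict_table = {
    ("energy efficiency", "product purity"): [
        "heat integration", "side-draw column", "dividing-wall column"],
    ("conversion", "selectivity"): [
        "recycle with low per-pass conversion", "staged feed"],
    ("capital cost", "waste minimization"): [
        "process intensification", "solvent recovery loop"],
}

def resolutions(objective_a: str, objective_b: str) -> list[str]:
    """Look up candidate principles for a conflict, ignoring pair order."""
    return (conflict_table.get((objective_a, objective_b))
            or conflict_table.get((objective_b, objective_a))
            or [])

print(resolutions("product purity", "energy efficiency"))
```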
Abstract:
Direct torque control (DTC) has become an accepted vector control method alongside current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account, and based on the analysis a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for estimating the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC, the stator flux linkage reference is usually kept constant; achieving the minimum current requires control of this reference. An on-line method that minimizes the current by controlling the stator flux linkage reference is presented, and the control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved, with an adaptive correction used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
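The initial rotor-angle estimation step lends itself to a short sketch: inductance measurements taken in several directions are fitted to a model that is nonlinear in the rotor angle. The two-harmonic saliency model below is an assumed stand-in for the thesis's model, and all values are synthetic.

```python
# Sketch of initial rotor-angle estimation from directional inductance
# measurements via nonlinear least squares; the saliency model is assumed.
import numpy as np
from scipy.optimize import least_squares

def inductance_model(theta, L0, L2, theta_r):
    # Saliency: inductance varies with twice the angular offset to the rotor.
    return L0 - L2 * np.cos(2.0 * (theta - theta_r))

# Synthetic measurements in 8 stator directions with noise (henries).
rng = np.random.default_rng(2)
theta_meas = np.linspace(0, np.pi, 8, endpoint=False)
L_meas = (inductance_model(theta_meas, 3e-3, 0.8e-3, 0.7)
          + 2e-5 * rng.standard_normal(8))

def residuals(p):
    L0, L2, theta_r = p
    return inductance_model(theta_meas, L0, L2, theta_r) - L_meas

fit = least_squares(residuals, x0=[2.5e-3, 0.5e-3, 0.0])
# Note: the 2-theta symmetry means the angle is recovered modulo pi; the
# remaining ambiguity is typically resolved with a polarity test.
print(f"estimated rotor angle: {fit.x[2]:.3f} rad (mod pi)")
```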
Abstract:
This thesis examines the coordination of the systems development process in a contemporary software-producing organization. The thesis consists of a series of empirical studies in which the actions, conceptions and artifacts of practitioners are analyzed using a theory-building case study research approach. The three phases of the thesis provide empirical observations on different aspects of systems development. The first phase examines the role of architecture in coordination and cost estimation in a multi-site environment. The second phase involves two studies on the evolving requirement understanding process and how to measure this process. The third phase summarizes the first two phases and concentrates on the role of methods and how practitioners work with them. All the phases provide evidence that current systems development method approaches are too naïve in addressing the complexity of the real world. In practice, development is influenced by opportunity and other contingent factors. The systems development process is not coordinated using the phases and tasks defined in methods as a universal mechanism for managing the process, as most method approaches assume. Instead, the studies suggest that managing the systems development process happens through coordinating development activities, with methods used as tools. These studies contribute to systems development methods by emphasizing support for communication and collaboration between systems development participants. Methods should not describe development activities and phases at a detailed level, but should offer higher-level guidance for practitioners on how to act in different systems development environments.
Abstract:
Software engineering is criticized as not being engineering or 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method, and is relatively easy to use and learn. Effort estimation accuracy has significantly improved after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but this newly proposed metric is dynamic: it makes use of the increased understanding of the nature of the work as specification and design work proceeds, and thus 'grows up' along with the software project. Developing the effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly; its purpose is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines, as it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and the model itself, together with remarks about the sensitivity of the model. Finally, an example of usage is shown.
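As an illustration of how estimation accuracy across releases can be analyzed, the sketch below computes the magnitude of relative error (MRE) and its mean (MMRE), a standard accuracy measure; the release figures are invented, and the thesis's own analysis may use different measures.

```python
# Sketch: tracking effort-estimation accuracy across releases with the
# magnitude of relative error (MRE) and its mean (MMRE). Figures invented.
estimates = {"R1": (1200, 1510), "R2": (980, 1100), "R3": (1400, 1350)}
# release -> (estimated person-hours, actual person-hours)

def mre(estimated: float, actual: float) -> float:
    """Magnitude of relative error: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual

mres = {rel: mre(e, a) for rel, (e, a) in estimates.items()}
for rel, v in mres.items():
    print(f"{rel}: MRE = {v:.1%}")
print(f"MMRE = {sum(mres.values()) / len(mres):.1%}")
```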
Abstract:
Background: Coxiella burnetii is a highly clonal microorganism which is difficult to culture, requiring BSL3 conditions for its propagation. This leads to a scarce availability of isolates worldwide. On the other hand, published characterization methods have delineated up to 8 different genomic groups and 36 genotypes. However, all these methodologies, with the exception of one that exhibited limited discriminatory power (3 genotypes), rely on performing between 10 and 20 PCR amplifications or on sequencing long fragments of DNA, which makes their direct application to clinical samples impracticable and leads to scarce accessibility of data on the circulation of C. burnetii genotypes. Results: To assess the variability of this organism in Spain, we have developed a novel method consisting of a multiplex (8-target) PCR and hybridization with specific probes; it reproduces the previous classification of this organism into 8 genomic groups and up to 16 genotypes. It allows direct characterization from clinical and environmental samples in a single run, which will help in the study of the different genotypes circulating in wild and domestic cycles as well as in sporadic human cases and outbreaks. The method has been validated with reference isolates. High variability of C. burnetii was found in Spain among the 90 samples tested, with 10 different genotypes detected; adaA-negative genotypes were associated with acute Q fever cases presenting as fever of intermediate duration with liver involvement, and with chronic cases. Genotypes infecting humans were also found in sheep, goats, rats, wild boar and ticks, while the only genotype found in cattle has never been found among our clinical samples. Conclusions: This newly developed methodology has made it possible to demonstrate that C. burnetii is highly variable in Spain. With the data presented here, cattle do not seem to participate in the transmission of C. burnetii to humans in the samples studied, while sheep, goats, wild boar, rats and ticks share genotypes with the human population.
Abstract:
The aim of this work was to develop the project cost estimation process of the engineering unit under study, so that in the future the unit's management would have more accurate cost information at its disposal. To make this possible, the unit's working practices, the cost structures of its projects and the cost attributes first had to be determined, which was made possible by examining the cost history data of past projects and by interviewing experts. The result of the work was a cost estimation process and model compatible with the target unit's other processes. The cost estimation method and model are based on cost attributes, which are defined separately for the environment under study. The cost attributes are found by examining historical data, that is, by analysing completed projects, their cost structures and the factors that have driven the costs. After this, weights and ranges for those weights must be defined for the cost attributes. The accuracy of the estimation model can be improved by calibrating the model. The Goal-Question-Metric (GQM) method was used as the framework of the study.
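A hypothetical sketch of the attribute-based estimation idea described above: weighted cost-attribute ratings scale a historical baseline cost, and calibration means tuning the weights against completed projects. All names and numbers are invented for illustration.

```python
# Hypothetical sketch of a weighted cost-attribute estimate.
baseline_cost = 100_000.0   # from comparable completed projects (EUR)

# attribute -> (weight, rating of the new project relative to baseline,
# where 1.0 means "same as the historical average")
attributes = {
    "technical novelty": (0.4, 1.3),
    "team experience":   (0.3, 0.9),
    "schedule pressure": (0.3, 1.1),
}

multiplier = sum(w * r for w, r in attributes.values())
estimate = baseline_cost * multiplier
print(f"cost estimate: {estimate:,.0f} EUR (multiplier {multiplier:.2f})")
# Calibration: adjust the weights, within their allowed ranges, until the
# estimates best reproduce the costs of completed projects.
```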
Abstract:
AIMS: c-Met is an emerging biomarker in pancreatic ductal adenocarcinoma (PDAC); there is no consensus regarding the immunostaining scoring method for this marker. We aimed to assess the prognostic value of c-Met overexpression in resected PDAC, and to elaborate a robust and reproducible scoring method for c-Met immunostaining in this setting. METHODS AND RESULTS: c-Met immunostaining was graded according to the validated MetMab score, a classic visual scale combining surface and intensity (SI score), or a simplified score (high c-Met: ≥20% of tumour cells with strong membranous staining), in stage I-II PDAC. A computer-assisted classification method (Aperio software) was developed. Clinicopathological parameters were correlated with disease-free survival (DFS) and overall survival (OS). One hundred and forty-nine patients were analysed retrospectively in a two-step process. Thirty-seven samples (whole slides) were analysed as a pre-run test. Reproducibility values were optimal with the simplified score (kappa = 0.773); high c-Met expression (7/37) was associated with shorter DFS [hazard ratio (HR) 3.456, P = 0.0036] and OS (HR 4.257, P = 0.0004). c-Met expression was concordant between whole slides and tissue microarrays in 87.9% of samples, and quantifiable with a specific computer-assisted algorithm. In the whole cohort (n = 131), patients with c-Met(high) tumours (36/131) had significantly shorter DFS (9.3 versus 20.0 months, HR 2.165, P = 0.0005) and OS (18.2 versus 35.0 months, HR 1.832, P = 0.0098) in univariate and multivariate analyses. CONCLUSIONS: c-Met expression assessed with the simplified score is an independent prognostic marker in stage I-II PDAC that may help to identify patients with a high risk of tumour relapse and poor survival.
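The simplified score has a direct computational reading, sketched below from the definition given above (high c-Met: ≥20% of tumour cells with strong membranous staining); the input format is an assumption.

```python
# Sketch of the simplified c-Met scoring rule described above; taking cell
# counts as inputs is an assumed interface.
def cmet_simplified_score(strong_membranous_cells: int,
                          tumour_cells: int) -> str:
    """"high" when >= 20% of tumour cells show strong membranous staining."""
    fraction = strong_membranous_cells / tumour_cells
    return "high" if fraction >= 0.20 else "low"

print(cmet_simplified_score(250, 1000))   # -> "high" (25% strong staining)
print(cmet_simplified_score(120, 1000))   # -> "low"  (12% strong staining)
```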
Abstract:
This article analyses the impact that innovation expenditure and intrasectoral and intersectoral externalities have on productivity in Spanish firms. While there is an extensive literature analysing the relationship between innovation and productivity, far fewer studies examine the importance of sectoral externalities, especially with a focus on Spain. One novelty of the study, which covers the industrial and service sectors, is that we also jointly consider the technology level of the sector in which the firm operates and the firm size. The database used is the Technological Innovation Panel (PITEC), which includes 12,813 firms for the year 2008 and has been little used in this type of study. The estimation method is iteratively reweighted least squares (IRLS), which is very useful for obtaining robust estimates in the presence of outliers. The results confirm that innovation has a positive effect on productivity, especially in high-tech and large firms. The impact of externalities is more heterogeneous: while intrasectoral externalities have a positive and significant effect, especially in low-tech firms independently of size, intersectoral externalities have a more ambiguous effect, being clearly significant for advanced industries, in which size has a positive effect.
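As an illustration of the estimator, the sketch below runs an IRLS-based robust regression using statsmodels' RLM (which is fitted by iteratively reweighted least squares) on synthetic data with a few gross outliers; it is not the paper's PITEC specification.

```python
# Sketch: robust regression via IRLS using statsmodels' RLM estimator.
# Data are synthetic, not the PITEC panel.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
innovation = rng.normal(size=n)                 # e.g. log innovation expenditure
y = 0.3 * innovation + rng.normal(scale=0.5, size=n)
y[:10] += 8                                     # a few gross outliers

X = sm.add_constant(innovation)
# Huber's T downweights large residuals during the IRLS iterations.
res = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(res.params)   # slope estimate stays close to 0.3 despite the outliers
```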
Abstract:
The present work describes the development of a fast and robust analytical method for the determination of 53 antibiotic residues, covering various chemical groups and some of their metabolites, in environmental matrices that are considered important sources of antibiotic pollution, namely hospital and urban wastewaters, as well as in river waters. The method is based on automated off-line solid phase extraction (SPE) followed by ultra-high-performance liquid chromatography coupled to quadrupole linear ion trap tandem mass spectrometry (UHPLC–QqLIT). For unequivocal identification and confirmation, and in order to fulfill EU guidelines, two selected reaction monitoring (SRM) transitions per compound are monitored (the most intense one is used for quantification and the second for confirmation). Quantification of target antibiotics is performed by the internal standard approach, using one isotopically labeled compound for each chemical group, in order to correct for matrix effects. The main advantages of the method are the automation and speed of sample preparation, achieved by reducing extraction volumes for all matrices; the fast separation of a wide spectrum of antibiotics by ultra-high-performance liquid chromatography; its sensitivity (limits of detection in the low ng/L range); and its selectivity (due to the use of tandem mass spectrometry). The inclusion of β-lactam antibiotics (penicillins and cephalosporins), compounds that are difficult to analyze in multi-residue methods due to their instability in water matrices, and of some antibiotic metabolites are other important benefits of the method developed. As part of the validation procedure, the method was applied to the analysis of antibiotic residues in hospital wastewaters, urban influent and effluent wastewaters, and river water samples.
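The internal-standard quantification step can be sketched as follows; the formula is the generic area-ratio calculation, and the numbers and response factor are illustrative, not values from the validated method.

```python
# Sketch of internal-standard quantification: the analyte response is
# normalized to an isotopically labelled standard of known concentration.
# Numbers and the response factor are illustrative.
def quantify(analyte_area: float, is_area: float,
             is_conc_ng_per_l: float, response_factor: float = 1.0) -> float:
    """Concentration = (analyte area / IS area) * IS concentration / RF,
    where RF comes from a calibration curve of area ratios."""
    return (analyte_area / is_area) * is_conc_ng_per_l / response_factor

# e.g. SRM quantifier-transition areas for an antibiotic and its labelled IS
print(f"{quantify(4.2e5, 2.1e5, 50.0, response_factor=1.8):.1f} ng/L")
```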
Abstract:
Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always produce biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position; this can increase the dimension of the parameter space and cause an overfitting problem. These two research problems are both related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. Comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased, while the same level of accuracy was maintained. To conclude, we have shown that conservation can be successfully used in the statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
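As an example of a positional conservation score, the sketch below uses normalized Shannon entropy over an alignment column; the thesis's actual measures may differ, and the toy alignment is invented.

```python
# Sketch: a positional conservation score based on normalized Shannon
# entropy of an alignment column (gaps ignored).
import math
from collections import Counter

def column_conservation(column: str, alphabet_size: int = 20) -> float:
    """1.0 = fully conserved column, 0.0 = maximally variable."""
    residues = [c for c in column if c != "-"]
    counts = Counter(residues)
    n = len(residues)
    entropy = -sum((k / n) * math.log2(k / n) for k in counts.values())
    return 1.0 - entropy / math.log2(alphabet_size)

alignment = ["MKVLA", "MKVLS", "MKILA", "MRVLA"]   # toy MSA rows
for i, col in enumerate(zip(*alignment)):
    print(f"position {i}: conservation = "
          f"{column_conservation(''.join(col)):.2f}")
```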
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact way to obtain surface topography is to apply photometric stereo in the estimation of surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies, and the results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of higher-frequency variation of surfaces. In this thesis, the reconstruction of planar high-frequency-varying surfaces is studied in the presence of imaging noise and blur. Two Wiener filter-based methods are proposed, one of which is optimal in the sense of the surface power spectral density, given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
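Surface reconstruction by integrating photometric-stereo gradient fields can be illustrated with the classic FFT-based Frankot-Chellappa projection below; this is a standard baseline, not the thesis's Wiener filter-based methods, and the test surface is synthetic.

```python
# Sketch: integrating gradient fields p = dz/dx, q = dz/dy into a surface z
# with the Frankot-Chellappa FFT projection (a standard baseline method).
import numpy as np

def frankot_chellappa(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2 * np.pi    # angular frequency per pixel
    wy = np.fft.fftfreq(rows) * 2 * np.pi
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                        # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                            # mean height is unrecoverable
    return np.real(np.fft.ifft2(Z))

# Toy check: gradients of a smooth bump should integrate back to the bump.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
z = np.exp(-4 * (x**2 + y**2))
p, q = np.gradient(z, axis=1), np.gradient(z, axis=0)
z_rec = frankot_chellappa(p, q)
print(f"max abs error: {np.max(np.abs(z_rec - (z - z.mean()))):.4f}")
```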