942 results for categorical and mix datasets
Abstract:
The principal aim of this paper is to measure the amount by which the profit of a multi-input, multi-output firm deviates from maximum short-run profit, and then to decompose this profit gap into components that are of practical use to managers. In particular, our interest is in the measurement of the contribution of unused capacity, along with measures of technical inefficiency, and allocative inefficiency, in this profit gap. We survey existing definitions of capacity and, after discussing their shortcomings, we propose a new ray economic capacity measure that involves short-run profit maximisation, with the output mix held constant. We go on to describe how the gap between observed profit and maximum profit can be calculated and decomposed using linear programming methods. The paper concludes with an empirical illustration, involving data on 28 international airline companies. The empirical results indicate that these airline companies achieve profit levels which are on average US$815m below potential levels, and that 70% of the gap may be attributed to unused capacity. (C) 2002 Elsevier Science B.V. All rights reserved.
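The short-run profit-maximisation step described above can be sketched as a small linear program. The firm, prices, and technology below are invented for illustration; the paper's own model distinguishes fixed and variable inputs and holds the output mix constant, which this toy version does not attempt:

```python
from scipy.optimize import linprog

# Hypothetical 2-output, 1-variable-input firm (illustrative numbers only).
# Maximise profit p.y - w*x subject to an input-requirement constraint
# a1*y1 + a2*y2 <= x and a capacity bound on total output.
p = [5.0, 3.0]      # output prices
w = 2.0             # variable-input price
a = [1.0, 0.5]      # input needed per unit of each output
capacity = 100.0    # fixed-capacity bound on total output

# Decision variables: y1, y2, x. linprog minimises, so negate the profit.
c = [-p[0], -p[1], w]
A_ub = [
    [a[0], a[1], -1.0],  # a.y - x <= 0  (input requirement)
    [1.0, 1.0, 0.0],     # y1 + y2 <= capacity
]
b_ub = [0.0, capacity]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
max_profit = -res.fun
```

Observed profit minus `max_profit` would then be the profit gap that the paper decomposes into capacity, technical and allocative components.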
Abstract:
Molecular evolution has been considered to be an essentially stochastic process, little influenced by the pace of phenotypic change. This assumption was challenged by a study (Omland, 1997) that demonstrated an association between rates of morphological and molecular change estimated from total-evidence phylogenies, a finding that led some researchers to challenge molecular date estimates of major evolutionary radiations. Here we show that Omland's (1997) result is probably due to methodological bias, particularly phylogenetic non-independence, rather than being indicative of an underlying evolutionary phenomenon. We apply three new methods specifically designed to overcome phylogenetic bias to 13 published phylogenetic datasets for vertebrate taxa, each of which includes both morphological characters and DNA sequence data. We find no evidence of an association between rates of molecular and morphological change.
Abstract:
The mechanism underlying segregation in liquid fluidized beds is investigated in this paper. A binary fluidized bed system that is not at a stable equilibrium condition is modelled in the literature as forming a mixed part, corresponding to a stable mixture, at the bottom of the bed, with a pure layer of the excess component always floating on the mixed part. On the basis of this model, (i) comprehensive criteria for binary particles of any type to mix/segregate, and (ii) a mixing/segregation regime map in terms of the size ratio and density ratio of the particles for a given fluidizing medium, are established in this work. Therefore, knowing the properties of given particles, a second type of particle can be chosen in order to avoid or to promote segregation according to the particular process requirements. The model is then extended to multicomponent fluidized beds and validated against experimental results observed for ternary fluidized beds. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Objectives: To compare the population modelling programs NONMEM and P-PHARM during investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potential interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate - transplant type - providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates - age and liver function tests - improved modelling further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 +/- 8.8 l/h, CL/F (cut-down liver) = 11.6 +/- 18.8 l/h and V/F = 712 +/- 792 l in NONMEM, and CL/F (whole liver) = 12.8 +/- 3.5 l/h, CL/F (cut-down liver) = 8.2 +/- 3.4 l/h and V/F = 221 +/- 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of error models for statistical modelling and coped better with complex covariate datasets.
Conclusion: Results from parametric modelling programs can vary due to different algorithms employed to estimate parameters, alternative methods of covariate analysis and variations and limitations in the software itself.
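As a minimal illustration of how apparent parameters like those reported above translate into predictions, the sketch below plugs the NONMEM whole-liver mean estimates into a one-compartment model with a bolus-input approximation; the dose and the absorption-free simplification are assumptions for illustration, not the model fitted in the study:

```python
import math

# Single-dose concentration-time sketch from the apparent parameters
# (CL/F, V/F) reported for the NONMEM whole-liver group. Absorption is
# ignored (bolus approximation) -- a simplification for illustration.
CL_F = 16.3    # apparent clearance, l/h
V_F = 565.0    # apparent volume of distribution, l
dose_mg = 5.0  # hypothetical oral tacrolimus dose, mg

k = CL_F / V_F  # first-order elimination rate constant, 1/h

def conc(t_h):
    """Plasma concentration (mg/l) t_h hours after the dose."""
    return (dose_mg / V_F) * math.exp(-k * t_h)

c0 = conc(0.0)
half_life_h = math.log(2) / k  # about one day with these estimates
```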
Abstract:
Health is considered to be a fundamental human right, and developing a better understanding of health is assumed to be a global social goal (Bloom, 1987). Yet many third-world countries, and some subpopulations within developed countries, do not enjoy a healthy existence. The research reported in this paper examined the conceptions of health and conceptions of illness held by a group of Aboriginal, Torres Strait Islander, and Papua New Guinean university students studying health science courses. The results revealed three conceptions of health and three conceptions of illness, indicating that these students held a mix of traditional cultural and Western beliefs. These findings may contribute to overcoming the dissonance between traditional and Western beliefs about health, and to the development of health care courses that are more specific to how these students understand health. This may also serve to improve the educational status of Aboriginal and Torres Strait Islander people and potentially improve the health status of these communities.
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies. (C) 2003 Elsevier Inc. All rights reserved.
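A toy sketch of the kind of linear discriminant the study describes, using two of the named features (hydrophobicity and start position of the hydrophobic region). All data points, the weights, and the least-squares fitting choice are invented for illustration; they are not the discriminant functions of the paper:

```python
import numpy as np

# Invented feature vectors: [mean hydrophobicity, start position of the
# hydrophobic region]. Signal peptides tend to start right at the
# N-terminus; transmembrane (TM) helices tend to start somewhat later.
X = np.array([
    [2.1, 1], [2.3, 2], [1.9, 1], [2.0, 3],     # signal peptides (+1)
    [2.8, 12], [3.0, 18], [2.6, 9], [2.9, 15],  # TM helices (-1)
], dtype=float)
y = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)

# Fit weights w and bias b by least squares; sign(X @ w + b) is then a
# simple linear discriminant between the two classes.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:2], coef[2]

def predict(features):
    """+1 = signal peptide, -1 = transmembrane helix."""
    return 1 if features @ w + b > 0 else -1
```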
Abstract:
This paper describes a process-based metapopulation dynamics and phenology model of prickly acacia, Acacia nilotica, an invasive alien species in Australia. The model, SPAnDX, describes the interactions between riparian and upland sub-populations of A. nilotica within livestock paddocks, including the effects of extrinsic factors such as temperature, soil moisture availability, and atmospheric concentration of carbon dioxide. The model includes the effects of management events such as changing the livestock species or stocking rate, applying fire, and applying herbicide. The predicted population behaviour of A. nilotica was sensitive to climate. Using 35 years of daily weather data for five representative sites spanning the range of conditions in which A. nilotica is found in Australia, the model predicted biomass levels that closely accord with expected values at each site. SPAnDX can be used as a decision-support tool in integrated weed management, and to explore the sensitivity of cultural management practices to climate change throughout the range of A. nilotica. The cohort-based DYMEX modelling package used to build and run SPAnDX provided several advantages over more traditional population modelling approaches (e.g. an appropriate specific formalism (discrete-time, cohort-based, process-oriented), a user-friendly graphical environment, an extensible library of reusable components, and a useful and flexible input/output support framework). (C) 2003 Published by Elsevier Science B.V.
Abstract:
Quantitative analysis of cine cardiac magnetic resonance (CMR) images for the assessment of global left ventricular morphology and function remains a routine task in clinical cardiology practice. To date, this process requires user interaction and therefore prolongs the examination (i.e. cost) and introduces observer variability. In this study, we sought to validate the feasibility, accuracy, and time efficiency of a novel framework for automatic quantification of left ventricular global function in a clinical setting.
Abstract:
This paper analyzes DNA information using entropy and phase-plane concepts. First, the DNA code is converted into a numerical format by means of histograms that capture DNA sequence lengths ranging from one up to ten bases. This strategy measures dynamical evolutions from 4 up to 4^10 signal states. The resulting histograms are analyzed using three distinct entropy formulations, namely the Shannon, Rényi and Tsallis definitions. Charts of entropy versus sequence length are applied to a set of twenty-four species, characterizing 486 chromosomes. The information is synthesized and visualized by adapting phase-plane concepts, leading to a categorical representation of chromosomes and species.
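The three entropy definitions named above can be written down directly. The four-bin histogram here is a made-up stand-in for the DNA word histograms used in the paper:

```python
import math

# A made-up relative-frequency histogram (e.g. over the four bases).
p = [0.4, 0.3, 0.2, 0.1]

def shannon(p):
    """Shannon entropy: -sum p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    """Renyi entropy of order alpha != 1: ln(sum p_i^alpha) / (1 - alpha)."""
    assert alpha != 1
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

def tsallis(p, q):
    """Tsallis entropy of order q != 1: (1 - sum p_i^q) / (q - 1)."""
    assert q != 1
    return (1 - sum(pi ** q for pi in p)) / (q - 1)
```

Both generalized entropies recover the Shannon definition in the limit alpha (or q) -> 1, which makes charts of the three against sequence length directly comparable.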
Abstract:
This paper analyzes the DNA code of several species from the perspective of information content. For that purpose, several concepts and mathematical tools are selected towards establishing a quantitative method that does not a priori distort the alphabet represented by the sequence of DNA bases. The synergies of associating Gray code, histogram characterization, and multidimensional scaling visualization lead to a collection of plots with a categorical representation of species and chromosomes.
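A minimal sketch of the Gray-code step: each DNA base maps to a 2-bit Gray code so that successive codes differ in a single bit, avoiding the artificial jumps a plain binary labelling would introduce. The particular base-to-code assignment below is an assumption for illustration; the abstract does not specify it:

```python
# Hypothetical base-to-code assignment; successive codes in the order
# A, C, G, T differ by exactly one bit (the Gray-code property).
GRAY = {"A": "00", "C": "01", "G": "11", "T": "10"}

def encode(seq):
    """Concatenate the 2-bit Gray codes of a DNA sequence."""
    return "".join(GRAY[b] for b in seq)

bits = encode("ACGT")
```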
Abstract:
Proteins are biochemical entities consisting of one or more blocks typically folded in a 3D pattern. Each block (a polypeptide) is a single linear sequence of amino acids that are biochemically bonded together. The amino acid sequence in a protein is defined by the sequence of a gene or several genes encoded in the DNA-based genetic code. This genetic code typically uses twenty amino acids, but in certain organisms it can also include two other amino acids. After linking of the amino acids during protein synthesis, each amino acid becomes a residue in a protein, which is then chemically modified, ultimately changing and defining the protein function. In this study, the authors analyze the amino acid sequence using alignment-free methods, aiming to identify structural patterns in sets of proteins and in the proteome, without any other prior assumptions. The paper starts by analyzing amino acid sequence data by means of histograms using fixed-length amino acid words (tuples). After creating the initial relative frequency histograms, they are transformed and processed in order to generate quantitative results for information extraction and graphical visualization. Selected samples from two reference datasets are used, and the results reveal that the proposed method is able to generate relevant outputs in accordance with current scientific knowledge in domains like protein sequence/proteome analysis.
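The fixed-length-tuple histogram step can be sketched in a few lines; the sequence and tuple length below are made up for illustration:

```python
from collections import Counter

def tuple_histogram(seq, k):
    """Relative-frequency histogram of all overlapping k-tuples in seq."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Made-up amino-acid sequence, length-2 tuples.
hist = tuple_histogram("MKVLAAGMKVLA", 2)
```

Histograms like `hist` are the raw input that the paper then transforms for information extraction and visualization.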
Abstract:
Background: A common task in analyzing microarray data is to determine which genes are differentially expressed across two (or more) kinds of tissue samples, or samples subjected to different experimental conditions. Several statistical methods have been proposed to accomplish this goal, generally based on measures of distance between classes. It is well known that biological samples are heterogeneous because of factors such as molecular subtypes or genetic background that are often unknown to the experimenter. For instance, in experiments which involve molecular classification of tumors it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples. Consequently, there can be genes differentially expressed in sample subgroups which are missed if the usual statistical approaches are used. In this paper we propose a new graphical tool which identifies not only genes with up and down regulation, but also genes with differential expression in different subclasses, which are usually missed if current statistical methods are used. This tool is based on two measures of distance between samples, namely the overlapping coefficient (OVL) between two densities and the area under the receiver operating characteristic (ROC) curve. The methodology proposed here was implemented in the open-source R software. Results: This method was applied to a publicly available dataset, as well as to a simulated dataset. We compared our results with those obtained using some of the standard methods for detecting differentially expressed genes, namely the Welch t-statistic, fold change (FC), rank products (RP), average difference (AD), weighted average difference (WAD), moderated t-statistic (modT), intensity-based moderated t-statistic (ibmT), significance analysis of microarrays (samT) and area under the ROC curve (AUC).
On both datasets, the differentially expressed genes with bimodal or multimodal distributions were missed by all of the standard selection procedures. We also compared our results with (i) the area between the ROC curve and the rising diagonal (ABCR) and (ii) the test for not-proper ROC curves (TNRC). We found our methodology more comprehensive, because it detects both bimodal and multimodal distributions, and different variances can be considered in the two samples. Another advantage of our method is that the behaviour of different kinds of differentially expressed genes can be analyzed graphically. Conclusion: Our results indicate that the arrow plot represents a new, flexible and useful tool for the analysis of gene expression profiles from microarrays.
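Of the two distance measures named above, the overlapping coefficient (OVL) is simply the integral of the pointwise minimum of two densities: 1 for identical densities, approaching 0 as they separate. A sketch using two hypothetical normal densities and a plain grid integration (the densities and grid are assumptions for illustration, not the estimators used in the paper):

```python
import math

def norm_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def ovl(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, n=20001):
    """OVL = integral of min(f, g), approximated on a uniform grid."""
    step = (hi - lo) / (n - 1)
    return step * sum(
        min(norm_pdf(lo + i * step, mu1, s1), norm_pdf(lo + i * step, mu2, s2))
        for i in range(n)
    )
```

For two equal-variance normals separated by d standard deviations the exact value is 2*Phi(-d/2), which makes the grid approximation easy to check.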
Abstract:
This paper deals with the coupled effect of temperature and silica fume addition on the rheological and mechanical behaviour and the porosity of grouts based on CEM I 42.5R cement, proportioned with a polycarboxylate-based high-range water reducer. Preliminary tests were conducted to identify the grout best able to fill a fibrous network, since the goal of this study was to develop an optimized grout able to be injected into a mat of steel fibers for concrete strengthening. The grout composition was developed based on criteria for fresh-state and hardened-state properties. For a CEM I 42.5R based grout, different high-range water reducer dosages (0%, 0.2%, 0.4%, 0.5%, 0.7%) and silica fume (SF) dosages (0%, 2%, 4%) were tested (as replacement of cement by mass). Rheological measurements were used to investigate the effect of polycarboxylate (PCE) and SF dosage on grout properties, particularly workability loss, as the mix was to be injected into a matrix of steel fibers for concrete jacketing. The workability behaviour was characterized by the rheological parameters yield stress and plastic viscosity (for different grout temperatures and resting times), as well as by the mini slump cone and funnel flow-time procedures. Further development then focused only on the best grout compositions. Cement substitution by 2% of SF exhibited the best overall behaviour and was considered the most promising of the compositions tested. Concerning the fresh-state analysis, a significant workability loss was detected when the grout temperature increased above 35 degrees C. Below this temperature the grout presented a self-levelling behaviour and a lifetime of 45 min. In the hardened state, silica fume increased not only the grout's porosity but also the grout's compressive strength at later ages, since the pozzolanic contribution to compressive strength does not occur until 28 d and beyond. (C) 2012 Elsevier Ltd. All rights reserved.
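The yield stress and plastic viscosity mentioned above are the two parameters of the Bingham model, tau = tau0 + mu_p * (shear rate), which is conventionally fitted to measured flow-curve data. The flow-curve points below are invented so that the fit is exact; they are not measurements from the study:

```python
import numpy as np

# Invented flow-curve data generated from tau = 15 + 1.0 * shear_rate,
# so the least-squares fit should recover tau0 = 15 Pa, mu_p = 1 Pa.s.
shear_rate = np.array([10.0, 20.0, 40.0, 80.0])   # 1/s
shear_stress = np.array([25.0, 35.0, 55.0, 95.0])  # Pa

# Linear least squares: stress = tau0 * 1 + mu_p * shear_rate.
A = np.vstack([np.ones_like(shear_rate), shear_rate]).T
(tau0, mu_p), *_ = np.linalg.lstsq(A, shear_stress, rcond=None)
```

Repeating the fit at different temperatures and resting times gives the kind of workability-loss curves the study reports.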
Expert opinion on best practice guidelines and competency framework for visual screening in children
Abstract:
PURPOSE: Screening programs to detect visual abnormalities in children vary among countries. The aim of this study is to describe experts' perceptions of best practice guidelines and a competency framework for visual screening in children. METHODS: A qualitative focus group technique was applied during the Portuguese national orthoptic congress to obtain the perceptions of an expert panel of 5 orthoptists and 2 ophthalmologists with experience in visual screening for children (mean age 53.43 years, SD ± 9.40). The panel received in advance a script with the description of three tuning competency dimensions (instrumental, systemic, and interpersonal) for visual screening. The session was recorded on video and audio. Qualitative data were analyzed using a categorical technique. RESULTS: According to the experts' views, six tests (35.29%) have to be included in a visual screening: distance visual acuity test, cover test, bi-prism or 4/6(Δ) prism, fusion, ocular movements, and refraction. Screening should be performed according to the child's age, before and after 3 years of age (17.65%). The expert panel highlighted the influence of professional experience on the application of a screening protocol (23.53%). They also showed concern about the control of false negatives (23.53%). Instrumental competencies were the most cited (54.09%), followed by interpersonal (29.51%) and systemic (16.4%). CONCLUSIONS: Orthoptists should have professional experience before starting to apply a screening protocol. False negative results are a concern that has to be more thoroughly investigated. The proposed framework focuses on core competencies highlighted by the expert panel. Competency programs could be important to develop better screening programs.
Abstract:
The species abundance distribution (SAD) has been a central focus of community ecology for over fifty years, and is currently the subject of widespread renewed interest. The gambin model has recently been proposed as a model that provides a superior fit to commonly preferred SAD models. It has also been argued that the model's single parameter (α) presents a potentially informative ecological diversity metric, because it summarises the shape of the SAD in a single number. Despite this potential, few empirical tests of the model have been undertaken, perhaps because the necessary methods and software for fitting the model have not existed. Here, we derive a maximum likelihood method to fit the model, and use it to undertake a comprehensive comparative analysis of the fit of the gambin model. The functions and computational code to fit the model are incorporated in a newly developed free-to-download R package (gambin). We test the gambin model using a variety of datasets and compare the fit of the gambin model to fits obtained using the Poisson lognormal, logseries and zero-sum multinomial distributions. We found that gambin almost universally provided a better fit to the data and that the fit was consistent for a variety of sample grain sizes. We demonstrate how α can be used to differentiate intelligibly between community structures of Azorean arthropods sampled in different land use types. We conclude that gambin presents a flexible model capable of fitting a wide variety of observed SAD data, while providing a useful index of SAD form in its single fitted parameter. As such, gambin has wide potential applicability in the study of SADs, and ecology more generally.
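The gambin model itself is fitted by the R package described above. As a sketch of the same maximum-likelihood machinery, the code below fits the simpler logseries distribution (one of the comparison models) to made-up abundance data; for the logseries, the MLE of the parameter x is the value whose analytical mean matches the sample mean abundance:

```python
import math
from scipy.optimize import brentq

# Made-up species abundance data (number of individuals per species).
abundances = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40]

def logseries_mean(x):
    """Mean of the logseries distribution with parameter 0 < x < 1."""
    return -x / ((1 - x) * math.log(1 - x))

def logseries_pmf(n, x):
    """P(N = n) = -x^n / (n * ln(1 - x)), for n = 1, 2, ..."""
    return -x ** n / (n * math.log(1 - x))

mean_n = sum(abundances) / len(abundances)
# The MLE condition for the logseries: distribution mean = sample mean.
x_hat = brentq(lambda x: logseries_mean(x) - mean_n, 1e-9, 1 - 1e-9)
```

Comparing the maximised likelihoods of candidate distributions fitted this way (logseries, Poisson lognormal, zero-sum multinomial, gambin) is what the comparative analysis in the paper does at scale.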