895 results for Process Re-engineering
Abstract:
This volume contains the Proceedings of the Twenty-Sixth Annual Biochemical Engineering Symposium held at Kansas State University on September 21, 1996. The program included 10 oral presentations and 14 posters. Some of the papers describe the progress of ongoing projects, and others contain the results of completed projects. Only brief summaries are given of some of the papers; many of the papers will be published in full elsewhere. A listing of those who attended is given below.

Contents:
Foreign Protein Production from SV40 Early Promoter in Continuous Cultures of Recombinant CHO Cells - Gautam Banik, Paul Todd, and Dhinakar Kompala
Enhanced Cell Recruitment Due to Cell-Cell Interactions - Brad Farlow and Matthias Nollert
The Recirculation of Hybridoma Suspension Cultures: Effects on Cell Death, Metabolism and Mab Productivity - Peng Jin and Carole A. Heath
The Importance of Enzyme Inactivation and Self-Recovery in Cometabolic Biodegradation of Chlorinated Solvents - Xi-Hui Zhang, Shanka Banerji, and Rakesh Bajpai
Phytoremediation of VOC-Contaminated Groundwater Using Poplar Trees - Melissa Miller, Jason Dana, L.C. Davis, Muralidharan Narayanan, and L.E. Erickson
Biological Treatment of Off-Gases from Aluminum Can Production: Experimental Results and Mathematical Modeling - Adeyma Y. Arroyo, Julio Zimbron, and Kenneth F. Reardon
Inertial Migration Based Separation of Chlorella Microalgae in Branched Tubes - N.M. Poflee, A.L. Rakow, D.S. Dandy, M.L. Chappell, and M.N. Pons
Contribution of Electrochemical Charge to Protein Partitioning in Aqueous Two-Phase Systems - Weiyu Fan and Charles E. Glatz
Biodegradation of Some Commercial Surfactants Used in Bioremediation - Jun Gu, G.W. Preckshot, S.K. Banerji, and Rakesh Bajpai
Modeling the Role of Biomass in Heavy Metal Transport in Vadose Zone - K.V. Nedunuri, L.E. Erickson, and R.S. Govindaraju
Multivariable Statistical Methods for Monitoring Process Quality: Application to Bioinsecticide Production by Bacillus thuringiensis - C. Puente and M.N. Karim
The Use of Polymeric Flocculants in Bacterial Lysate Streams - H. Graham, A.S. Cibulskas and E.H. Dunlop
Effect of Water Content on Transport of Trichloroethylene in a Chamber with Alfalfa Plants - Muralidharan Narayanan, Jiang Hu, Lawrence C. Davis, and Larry E. Erickson
Detection of Specific Microorganisms Using the Arbitrary Primed PCR in the Bacterial Community of Vegetated Soil - X. Wu and L.C. Davis
Flux Enhancement Using Backpulsing - V.T. Kuberkar and R.H. Davis
Chromatographic Purification of Oligonucleotides: Comparison with Electrophoresis - Stephen P. Cape, Ching-Yuan Lee, Kevin Petrini, Sean Foree, Michael G. Sportiello and Paul Todd
Determining Singular Arc Control Policies for Bioreactor Systems Using a Modified Iterative Dynamic Programming Algorithm - Arun Tholudur and W. Fred Ramirez
Pressure Effect on Subtilisins Measured via FTIR, EPR and Activity Assays, and Its Impact on Crystallizations - J.N. Webb, R.Y. Waghmare, M.G. Bindewald, T.W. Randolph, J.F. Carpenter, C.E. Glatz
Intercellular Calcium Changes in Endothelial Cells Exposed to Flow - Laura Worthen and Matthias Nollert
Application of Liquid-Liquid Extraction in Propionic Acid Fermentation - Zhong Gu, Bonita A. Glatz, and Charles E. Glatz
Purification of Recombinant T4 Lysozyme from E. coli: Ion-Exchange Chromatography - Weiyu Fan, Matt L. Thatcher, and Charles E. Glatz
Recovery and Purification of Recombinant Beta-Glucuronidase from Transgenic Corn - Ann R. Kusnadi, Roque Evangelista, Zivko L. Nikolov, and John Howard
Effects of Auxins and Cytokinins on Formation of Catharanthus roseus G. Don Multiple Shoots - Ying-Jin Yuan, Yu-Min Yang, Tsung-Ting Hu, and Jiang Hu
Fate and Effect of Trichloroethylene as Nonaqueous Phase Liquid in Chambers with Alfalfa - Qizhi Zhang, Brent Goplen, Sara Vanderhoof, Lawrence C. Davis, and Larry E. Erickson
Oxygen Transport and Mixing Considerations for Microcarrier Culture of Mammalian Cells in an Airlift Reactor - Sridhar Sunderam, Frederick R. Souder, and Marylee Southard
Effects of Cyclic Shear Stress on Mammalian Cells under Laminar Flow Conditions: Apparatus and Methods - M.L. Rigney, M.H. Liew, and M.Z. Southard
Abstract:
In Chile, small-scale farmers are classified according to old approaches from 1993 that do not reflect the changes that have occurred over the last two decades. Maule is the region with the largest rural population in Chile and represents a significant stratum for development, innovation and competitiveness. This study explores a new approach to the classification of small-scale farmers associated with Family Farm Agriculture (AFC) in Chile, and it describes a commercial profile, AFC-1, for farmers of the Maule Region. A cluster analysis is used to identify AFC-1 farmers. The analysis includes four association variables: Total Assets, Farm Income, Production Costs and Management Indicators. The results suggest that 16.4% of the farmers have a commercial profile and could be left out of the support provided by the National Institute for Agricultural Development (INDAP). This group of farmers would not belong to the AFC in the short term. This could impose restrictions on AFC-1 farmers such as lack of access to credit, fewer investment incentives and less technical assistance. Slow technology adoption and limited welfare improvement would therefore be expected. New agrarian policies are warranted to support this important group of farmers with a commercial profile.
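As a minimal sketch of the kind of cluster analysis the abstract describes (assumptions throughout, not the study's actual procedure), the snippet below groups farmers on the four standardized association variables and reads the higher-income cluster as the commercial (AFC-1) profile. The synthetic data stand in for the survey data used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: Total Assets, Farm Income, Production Costs, Management Indicator
# (synthetic, illustrative figures only)
farmers = np.vstack([
    rng.normal([30_000, 8_000, 5_000, 0.4], [8_000, 2_500, 1_500, 0.1], (500, 4)),
    rng.normal([120_000, 45_000, 20_000, 0.8], [30_000, 9_000, 5_000, 0.1], (100, 4)),
])

# Standardize the four association variables, then cluster.
X = StandardScaler().fit_transform(farmers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Interpret the cluster with the higher mean farm income as the commercial one.
commercial = labels == np.argmax([farmers[labels == k, 1].mean() for k in (0, 1)])
print(f"{commercial.mean():.1%} of farmers fall in the commercial-profile cluster")
```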
Abstract:
In the following article we aim to account for some aspects that contribute to thinking about how a pedagogical field is constituted. We propose to do this through an itinerary that makes explicit the different forms of relation between two disciplines that take education as their object: sociology and pedagogy. To this end, we carry out a historical analysis that takes as its reference three moments in the development of educational research over the course of the twentieth century. In each of these moments we can see different forms of articulation between the two disciplines. We then advance the analysis of how this process unfolded in the case of Uruguay, finally examining how this articulation is reflected in the design of two educational policies: the Escuelas de Tiempo Completo (Full-Time Schools, ETC) and the Programa de Maestros Comunitarios (Community Teachers Program, PMC).
Abstract:
Usability is the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute. Furthermore, poor usability in software systems contributes largely to software failing in actual use. One of the main disciplines involved in usability is Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that should be present in applications to improve their usability, yet incorporating them into software continues to be far from trivial for developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. To bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; however, the problem remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate eleven usability features with high impact on software design. The Usability-Oriented Software Development Process and the Software Usability Guidelines have been validated across multiple academic projects and shown to help software developers include such usability features in their applications. Their use significantly reduced development time and improved the quality of the resulting designs of these projects. Furthermore, in this work we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines by providing a framework that helps software developers create usable applications in an efficient way.
Abstract:
All meta-analyses should include a heterogeneity analysis. Even so, it is not easy to decide whether a set of studies is homogeneous or heterogeneous because of the low statistical power of the statistics used (usually the Q test). Objective: Determine a set of rules enabling SE researchers to find out, based on the characteristics of the experiments to be aggregated, whether or not heterogeneity can feasibly be detected with accuracy. Method: Evaluate the statistical power of heterogeneity detection methods using a Monte Carlo simulation process. Results: The Q test is not powerful when the meta-analysis contains up to a total of about 200 experimental subjects and the effect-size difference is less than 1. Conclusions: The Q test cannot be used as a decision-making criterion for meta-analysis in small-sample settings like SE. Random-effects models should be used instead of fixed-effects models. Caution should be exercised when applying Q test-mediated decomposition into subgroups.
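The Monte Carlo procedure the abstract describes lends itself to a compact illustration. The following is a minimal sketch, not the authors' code, of estimating the Q test's power under assumptions chosen here purely for illustration (equal-size groups, standardized mean differences, half the studies carrying a true effect of `delta`):

```python
import numpy as np
from scipy import stats

def q_test_power(n_per_group=10, k_studies=4, delta=0.5,
                 alpha=0.05, n_sims=5000, rng=None):
    """Fraction of simulations in which Cochran's Q rejects homogeneity.

    Half of the k studies have true standardized effect 0 and the other
    half have true effect `delta`, so the meta-analysis is heterogeneous
    by construction; the rejection rate estimates the test's power.
    """
    rng = rng or np.random.default_rng(0)
    true_effects = np.array([0.0] * (k_studies // 2) +
                            [delta] * (k_studies - k_studies // 2))
    rejections = 0
    for _ in range(n_sims):
        d, v = [], []
        for mu in true_effects:
            ctrl = rng.normal(0.0, 1.0, n_per_group)
            trt = rng.normal(mu, 1.0, n_per_group)
            sp = np.sqrt((ctrl.var(ddof=1) + trt.var(ddof=1)) / 2)
            di = (trt.mean() - ctrl.mean()) / sp           # Cohen's d
            d.append(di)
            v.append((2 * n_per_group) / n_per_group**2 +  # sampling variance of d
                     di**2 / (4 * n_per_group))
        d, w = np.array(d), 1 / np.array(v)
        d_bar = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - d_bar) ** 2)                   # Cochran's Q statistic
        if q > stats.chi2.ppf(1 - alpha, df=k_studies - 1):
            rejections += 1
    return rejections / n_sims

# With ~80 subjects in total and a 0.5 effect-size gap, the estimated power
# is typically low, consistent with the abstract's small-sample warning.
print(q_test_power())
```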
Abstract:
Interviews are the most widely used elicitation technique in Requirements Engineering (RE). Despite their importance, research on interviews is quite limited, particularly from an experimental perspective. We have performed a series of experiments exploring the relative effectiveness of structured and unstructured interviews. This line of research has been active in Information Systems in recent years, so our experiments can be aggregated with existing ones to obtain guidelines for practice. Experimental aggregation is a demanding task: it requires not only a large number of experiments but also consideration of the influence of existing moderators. However, in the current state of practice in RE, those moderators are unknown. We believe that analyzing the threats to validity in interviewing experiments may give insight into how to improve further replications and the corresponding aggregations. This strategy is likely applicable in other Software Engineering areas as well.
Abstract:
Semantic technologies have become widely adopted in recent years, and choosing the right technologies for the problems that users face is often a difficult task. This paper presents an application of the Analytic Network Process for the recommendation of semantic technologies, which is based on a quality model for semantic technologies. Instead of relying on expert-based comparisons of alternatives, the comparisons in our framework depend on real evaluation results. Furthermore, the recommendations in our framework derive from user quality requirements, which leads to better recommendations tailored to users’ needs. This paper also presents an algorithm for pairwise comparisons, which is based on user quality requirements and evaluation results.
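As a rough illustration of the idea of grounding pairwise comparisons in evaluation results rather than expert judgment, the sketch below builds a reciprocal comparison matrix from measured scores and extracts a priority vector via the standard principal-eigenvector step used in AHP/ANP. The scoring scheme and example figures are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def comparison_matrix(scores):
    """Build a reciprocal pairwise-comparison matrix from raw evaluation scores."""
    s = np.asarray(scores, dtype=float)
    return s[:, None] / s[None, :]   # a_ij = score_i / score_j, so a_ji = 1 / a_ij

def priorities(matrix):
    """Priority vector as the normalized principal eigenvector (AHP/ANP step)."""
    vals, vecs = np.linalg.eig(matrix)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Hypothetical example: three semantic-technology alternatives measured on one
# quality criterion (say, an interoperability test pass rate).
scores = [0.92, 0.78, 0.40]
print(priorities(comparison_matrix(scores)))  # weights ordered as the measurements
```

Because the matrix is built directly from measured ratios, it is perfectly consistent by construction; real expert-elicited matrices generally are not, which is one motivation the abstract gives for deriving comparisons from evaluation results.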
Abstract:
At the beginning of the 1990s, ontology development was similar to an art: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline: the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science; (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bioinformatics, and education; and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages, and tools for building ontologies. In addition, we comment briefly on how all these elements can be used in the Linked Data initiative.
Abstract:
In a degree course such as Forestry Engineering, the general teaching objectives consist of explaining and helping students to understand the principles of Mechanics. For some time now we have encountered significant difficulties in teaching this subject due to the students' lack of motivation and to their insufficient prior preparation for the topic. If we add to this the discipline's inherent complexity and the students' preconceptions about the subject, these teaching difficulties become considerable. For this reason a series of didactic activities has been introduced sequentially in the teaching of this subject. This work describes the methodology, procedure and results for the action of developing a group work project using Descartes software. The results of this experiment can be considered very positive. Some of the critical preconceptions for learning the subject can be corrected, and the tutoring process in the classroom contributes to an improvement in teacher-student communication. Since this scheme was established, the number of students taking part each academic year has increased, and this is the group with the greatest percentage of passing scores.
Abstract:
A high productivity rate in Engineering is related to efficient management of the flow of the large quantities of information, and of the associated decision-making activities, that are inherent to Engineering processes in both design and production contexts. Dealing with such problems from an integrated point of view, mimicking real scenarios, is not given much attention in Engineering degrees. In the context of Engineering Education, there are a number of courses designed to develop specific competencies, as required by academic curricula, but not many in which integration competencies are the main target. In this paper, a course devoted to that aim is discussed. The course is taught in a Marine Engineering degree, but the philosophy could be used in any Engineering field. All the lessons are given in a computer room in which every student can use all of the software applications covered. The first part of the course is dedicated to Project Management: the students acquire skills in defining, using Ms-PROJECT, the work breakdown structure (WBS) and the organization breakdown structure (OBS) of Engineering projects, through a series of examples of increasing complexity, ending with the case of vessel construction. The second part of the course is dedicated to the use of a database manager, Ms-ACCESS, for managing production-related information. A series of examples of increasing complexity is treated, ending with the management of the pipe database of a real vessel. This database consists of a few thousand pipes, for which a production timing frame is defined, which connects this part of the course with the first one. Finally, the third part of the course is devoted to work with FORAN, an Engineering Production package in widespread use in the shipbuilding industry. With this package, the frames and plates on which all the outfitting will be carried out are defined through cooperative work by the students, working simultaneously on the same 3D model. In the paper, specific details about the learning process are given. Surveys have been administered to the students in order to get feedback on their experience as well as to assess their satisfaction with the learning process. Results from these surveys are discussed in the paper.
Abstract:
The coagulation of milk is the fundamental process in cheese-making. It is based on gel formation as a consequence of physicochemical changes taking place in the casein micelles, and monitoring the whole process of milk curd formation is a constant preoccupation for dairy researchers and cheese companies (Lagaude et al., 2004). In addition to advances in composition-based applications of near-infrared spectroscopy (NIRS), innovative uses of this technology are pursuing dynamic applications that show promise, especially with regard to tracking a sample in situ during food processing (Bock and Connelly, 2008). Along these lines, the literature describes cheese-making applications of NIRS for determining the curd cutting time, which conclude that NIRS would be a suitable method for monitoring milk coagulation, as shown for example in the works published by Fagan et al. (Fagan et al., 2008; Fagan et al., 2007), based on the use of the commercial CoAguLite probe (with an LED at 880 nm and a photodetector for light reflectance detection).
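As a hedged illustration (not the cited authors' algorithm) of how such a single-wavelength reflectance trace can be turned into a coagulation marker, the sketch below locates the peak of the smoothed first derivative of a synthetic reflectance profile; in this kind of approach, cutting time is typically predicted from such a landmark plus an empirically fitted offset. All signals and numbers here are synthetic assumptions.

```python
import numpy as np

def derivative_peak_time(t, reflectance, window=5):
    """Time at which the smoothed d(reflectance)/dt is maximal."""
    kernel = np.ones(window) / window
    smooth = np.convolve(reflectance, kernel, mode="same")  # moving average
    dr_dt = np.gradient(smooth, t)
    return t[np.argmax(dr_dt)]

# Synthetic sigmoidal reflectance rise mimicking gel firming after rennet addition.
t = np.linspace(0, 40, 400)                      # minutes
reflectance = 1 / (1 + np.exp(-(t - 15) / 2))    # arbitrary reflectance units
t_peak = derivative_peak_time(t, reflectance)
print(f"derivative peak at ~{t_peak:.1f} min; a cutting time could then be "
      f"estimated as this landmark plus an empirically fitted offset")
```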
Abstract:
Over roughly the last ten years, research on construction techniques and materials in ancient Rome has advanced considerably. This work has been directed at obtaining data on chemical composition, as well as on the action and reaction of materials exposed to weathering or to post-depositional displacement. Many of these data should be interpreted as the result of deterioration and damage to concrete placed in a landscape with particular meteorological characteristics. Concrete mixtures such as lime and gypsum mortars should be analysed in laboratory test programmes, and not only through descriptions based on the reference works of Strabo, Pliny the Elder or Vitruvius. Roman manufacture was determined by weather conditions, landscape, natural resources and, of course, the economic situation of the owner. In any case, every aspect of construction must be researched. On the one hand, chemical techniques such as X-ray diffraction and optical microscopy reveal the granular composition of the mixture. On the other hand, physical and mechanical techniques such as compressive strength, capillary absorption on contact or water behaviour reveal how binder and aggregates react to weather effects. However, we must be able to interpret these results. In recent years, many analyses carried out at archaeological sites in Spain have contributed different points of view and provided new data towards a method for continuing the investigation of Roman mortars. If chemical and physical analyses of Roman mortars are carried out together, and we are able to interpret the construction and the resources used, we can understand the construction process, its date, and also how it may be restored in the future.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
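A minimal sketch of the combination idea just described: if several taggers annotate the same tokens at a common level, a simple majority vote can cancel some individual errors before they propagate to higher-level tools. The tagger outputs below are hypothetical stand-ins; real systems would wrap actual tools behind a shared tagset (the common annotation level the text calls for).

```python
from collections import Counter

def combine_pos_annotations(tags_per_tagger):
    """Majority vote per token across taggers using a shared tagset.

    tags_per_tagger: list of tag sequences, one per tagger, all aligned
    to the same tokenization of the input text.
    """
    combined = []
    for tags_for_token in zip(*tags_per_tagger):
        tag, _count = Counter(tags_for_token).most_common(1)[0]
        combined.append(tag)
    return combined

# Hypothetical outputs of three taggers for "Time flies like an arrow":
tagger_a = ["NOUN", "VERB", "ADP", "DET", "NOUN"]
tagger_b = ["NOUN", "NOUN", "ADP", "DET", "NOUN"]   # mis-tags "flies"
tagger_c = ["NOUN", "VERB", "ADP", "DET", "NOUN"]
print(combine_pos_annotations([tagger_a, tagger_b, tagger_c]))
# -> ['NOUN', 'VERB', 'ADP', 'DET', 'NOUN']; the single error is outvoted
```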
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based