930 results for Methods engineering
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
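The two-step construction can be illustrated for the simplest member of the family, r = 2, where the probabilistic step maps each key to an edge of a random graph and the deterministic step assigns vertex values by traversing the (acyclic) graph. The following Python sketch uses simple seeded hashes and is an illustration of the general idea, not the authors' exact algorithms:

```python
import random

def build_ophf(keys, ratio=2.5, max_tries=100):
    """Order-preserving minimal perfect hash, sketched for the r = 2 case.
    Step 1 (probabilistic): map each key to a random edge, retrying until
    the resulting graph is acyclic.  Step 2 (deterministic): assign vertex
    values g so that g[h1(k)] + g[h2(k)] = index(k) (mod n)."""
    n = len(keys)
    m = max(int(ratio * n), n + 2)          # vertices; > 2n keeps acyclicity likely
    for _ in range(max_tries):
        s1, s2 = random.randrange(1 << 30), random.randrange(1 << 30)
        h1 = lambda k, s=s1: hash((s, k)) % m
        h2 = lambda k, s=s2: hash((s, k)) % m
        adj = [[] for _ in range(m)]        # vertex -> list of (neighbour, key index)
        ok = True
        for i, k in enumerate(keys):
            u, v = h1(k), h2(k)
            if u == v:                      # self-loop: graph cannot be acyclic
                ok = False
                break
            adj[u].append((v, i))
            adj[v].append((u, i))
        if not ok:
            continue
        g = [None] * m

        def assign(root):                   # returns False if a cycle is found
            g[root] = 0
            stack = [(root, None)]          # (vertex, edge index we arrived by)
            while stack:
                u, via = stack.pop()
                for v, i in adj[u]:
                    if i == via:
                        continue
                    if g[v] is not None:    # reached a second time -> cycle, retry
                        return False
                    g[v] = (i - g[u]) % n
                    stack.append((v, i))
            return True

        if all(assign(r) for r in range(m) if g[r] is None and adj[r]):
            return lambda k: (g[h1(k)] + g[h2(k)]) % n
    raise RuntimeError("no acyclic graph found; try a larger ratio")
```

Because `(g[h1(k)] + g[h2(k)]) mod n` is forced to equal each key's original index, the resulting function is minimal, perfect, and order preserving; the retry loop corresponds to the probabilistic linear-time first step, and the graph traversal to the deterministic linear-time second step.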
Abstract:
Little consensus exists in the literature regarding methods for determination of the onset of electromyographic (EMG) activity. The aim of this study was to compare the relative accuracy of a range of computer-based techniques with respect to EMG onset determined visually by an experienced examiner. Twenty-seven methods were compared which varied in terms of EMG processing (low pass filtering at 10, 50 and 500 Hz), threshold value (1, 2 and 3 SD beyond mean of baseline activity) and the number of samples for which the mean must exceed the defined threshold (20, 50 and 100 ms). Three hundred randomly selected trials of a postural task were evaluated using each technique. The visual determination of EMG onset was found to be highly repeatable between days. Linear regression equations were calculated for the values selected by each computer method which indicated that the onset values selected by the majority of the parameter combinations deviated significantly from the visually derived onset values. Several methods accurately selected the time of onset of EMG activity and are recommended for future use. Copyright (C) 1996 Elsevier Science Ireland Ltd.
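As an illustration of the parameter family compared in the study (a threshold expressed in baseline SD units plus a minimum supra-threshold duration), a minimal threshold-based detector can be sketched as follows; the omitted filtering stage and the parameter names are simplifying assumptions, not the paper's implementation:

```python
import statistics

def emg_onset(signal, fs, baseline_ms=200, k_sd=2, min_ms=50):
    """Return the index of the first sample from which the rectified EMG
    stays above (baseline mean + k_sd * baseline SD) for min_ms ms.
    Returns None if no onset is detected."""
    nb = int(fs * baseline_ms / 1000)       # baseline window, in samples
    nmin = int(fs * min_ms / 1000)          # required supra-threshold run
    rect = [abs(x) for x in signal]         # full-wave rectification
    thr = statistics.mean(rect[:nb]) + k_sd * statistics.pstdev(rect[:nb])
    run = 0
    for i in range(nb, len(rect)):
        run = run + 1 if rect[i] > thr else 0
        if run >= nmin:
            return i - nmin + 1             # onset = first sample of the run
    return None
```

The study's 27 combinations correspond to sweeping a low-pass cut-off (10, 50, 500 Hz), the threshold `k_sd` over {1, 2, 3}, and the duration window over {20, 50, 100} ms.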
Abstract:
Our long-term objective is to devise reliable methods to generate biological replacement teeth exhibiting the physical properties and functions of naturally formed human teeth. Previously, we demonstrated the successful use of tissue engineering approaches to generate small, bioengineered tooth crowns from harvested pig and rat postnatal dental stem cells (DSCs). To facilitate characterizations of human DSCs, we have developed a novel radiographic staging system to accurately correlate human third molar tooth developmental stage with anticipated harvested DSC yield. Our results demonstrated that DSC yields were higher in less developed teeth (Stages 1 and 2) and lower in more developed teeth (Stages 3, 4, and 5). The greatest cell yields and colony-forming unit (CFU) capability were obtained from the dental pulp of Stage 1 and 2 teeth. We conclude that radiographic developmental staging can be used to accurately assess the utility of harvested human teeth for future dental tissue engineering applications.
Abstract:
The possibility of obtaining transplantable oral epithelia opens new perspectives for oral treatments, most of which are currently surgical and can result in mucosal failures. Such in vitro epithelia would also be useful as reconstructive material for other parts of the human body. Because many researchers still use controversial methods, we evaluated and compared the efficiency of the enzymatic and direct explant methods for obtaining oral keratinocytes, using fragments of oral epithelium. The comparison covered the time needed to obtain cells, cell yield, life-span, and epithelium-forming capacity. The results showed that keratinocytes can be obtained from a small oral fragment, and allowed us to verify the advantages and particular restrictions of each method. We conclude that, under our conditions, the enzymatic method gave the best results in time needed to obtain cells, cell yield, and life-span; both methods showed the same capacity to form epithelia in vitro.
Abstract:
Problems associated with the stickiness of food in processing and storage practices, along with its causative factors, are outlined. Fundamental mechanisms that explain why and how food products become sticky are discussed. Methods currently in use for characterizing and overcoming stickiness problems in food processing and storage operations are described. The use of a glass transition temperature-based model, which provides a rational basis for understanding and characterizing the stickiness of many food products, is highlighted.
Abstract:
This paper presents a comparison of surface diffusivities of hydrocarbons in activated carbon. The surface diffusivities are obtained from the analysis of kinetic data collected using three different kinetic methods: the constant molar flow, the differential adsorption bed, and the differential permeation methods. In general the values of surface diffusivity obtained by these methods agree with each other, and it is found that the surface diffusivity increases very fast with loading. Such a fast increase cannot be accounted for by a thermodynamic Darken factor, and surface heterogeneity only partially accounts for the fast rise of surface diffusivity with loading. Surface diffusivities of methane, ethane, propane, n-butane, n-hexane, benzene and ethanol on activated carbon are reported in this paper.
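For reference, the thermodynamic (Darken) correction mentioned above relates the transport surface diffusivity to a corrected coefficient through the slope of the adsorption isotherm; in standard notation (not necessarily the paper's own symbols), with adsorbed-phase concentration $C_\mu$ and gas-phase pressure $p$:

```latex
D_s(C_\mu) \;=\; D_{s,0}\,\left(\frac{\partial \ln p}{\partial \ln C_\mu}\right)_T
```

For a Langmuir isotherm this factor reduces to $1/(1-\theta)$, where $\theta$ is the fractional loading, so it does grow with loading; the abstract's point is that the observed rise in surface diffusivity outpaces even this correction.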
Abstract:
This article describes a new test method for the assessment of the severity of environmental stress cracking of biomedical polyurethanes in a manner that minimizes the degree of subjectivity involved. The effect of applied strain and acetone pre-treatment on degradation of Pellethane 2363 80A and Pellethane 2363 55D polyurethanes under in vitro and in vivo conditions is studied. The results are presented using a magnification-weighted image rating system that allows the semi-quantitative rating of degradation based on distribution and severity of surface damage. Devices for applying controlled strain to both flat sheet and tubing samples are described. The new rating system consistently discriminated between the effects of acetone pre-treatments, strain and exposure times in both in vitro and in vivo experiments. As expected, P80A underwent considerable stress cracking compared with P55D. P80A produced similar stress crack ratings in both in vivo and in vitro experiments; however, P55D performed worse under in vitro conditions than in vivo. This result indicates that care must be taken when interpreting in vitro results in the absence of in vivo data. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Low-micromolar concentrations of sulfite, thiosulfate and sulfide, present in synthetic wastewater or anaerobic digester effluent, were quantified by means of derivatization with monobromobimane, followed by HPLC separation with fluorescence detection. The concentration of elemental sulfur was determined, after its extraction with chloroform from the derivatized sample, by HPLC with UV detection. Recoveries of sulfide (both matrices), and of thiosulfate and sulfite (synthetic wastewater) were between 98 and 103%. The in-run RSDs on separate derivatizations were 13 and 19% for sulfite (two tests), between 1.5 and 6.6% for thiosulfate (two tests) and between 4.1 and 7.7% for sulfide (three tests). Response factors for derivatives of sulfide and thiosulfate, but not sulfite, were steady over a 13-month period during which 730 samples were analysed. Dithionate and tetrathionate did not seem to be detectable with this method. The distinctness of the elemental sulfur and the derivatizing-agent peaks was improved considerably by detecting elution at 297 instead of 263 nm. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Application of novel analytical and investigative methods such as fluorescence in situ hybridization, confocal laser scanning microscopy (CLSM), microelectrodes and advanced numerical simulation has led to new insights into micro- and macroscopic processes in bioreactors. However, the question is still open whether or not these new findings and the subsequent gain of knowledge are of significant practical relevance and, if so, where and how. To find suitable answers, it is necessary for engineers to know what can be expected by applying these modern analytical tools. Similarly, scientists could benefit significantly from an intensive dialogue with engineers in order to find out about practical problems and conditions existing in wastewater treatment systems. In this paper, an attempt is made to help bridge the gap between science and engineering in biological wastewater treatment. We provide an overview of recently developed methods in microbiology and in mathematical modeling and numerical simulation. A questionnaire is presented which may help generate a platform from which further technical and scientific developments can be accomplished. Both the paper and the questionnaire are aimed at encouraging scientists and engineers to enter into an intensive, mutually beneficial dialogue. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works or the original work by Pahl provides a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
In the context of electricity markets, transmission pricing is an important tool to achieve an efficient operation of the electricity system. The electricity market is influenced by several factors; however, transmission network management is one of the most important aspects, because the network is a natural monopoly. Transmission tariffs can help to regulate the market; for this reason, they must follow strict criteria. This paper presents the following methods for charging for the use of transmission networks by electricity market players: the Postage-Stamp Method; the MW-Mile Method; Distribution Factors Methods; the Tracing Methodology; Bialek’s Tracing Method; and the Locational Marginal Price. A nine-bus transmission network is used to illustrate the application of the tariff methods.
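Two of the simplest methods listed can be sketched in a few lines; the function names and the flow normalization chosen for the MW-mile variant are illustrative assumptions, not the paper's formulation:

```python
def postage_stamp(total_cost, loads):
    # Postage-stamp: network cost shared in proportion to each player's
    # load (MW), independent of where the player is connected.
    total = sum(loads.values())
    return {p: total_cost * mw / total for p, mw in loads.items()}

def mw_mile(flows, line_costs):
    # MW-mile: each player pays a share of every line's cost proportional
    # to the flow it causes on that line (one common normalization).
    # flows[player][line] = MW attributed to the player on that line.
    return {
        p: sum(
            line_costs[l] * abs(f) / sum(abs(flows[q].get(l, 0.0)) for q in flows)
            for l, f in per_line.items()
        )
        for p, per_line in flows.items()
    }
```

Distribution-factor and tracing methods differ chiefly in how the per-player `flows` are attributed; locational marginal pricing instead derives charges from nodal price differences.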
Abstract:
Optimization problems arise in science, engineering, economics, and many other fields, and we need to find the best solution for each situation. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the algorithms available for solving them, and, of course, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the functions involved are nonlinear and their derivatives are unknown or very difficult to calculate, suitable methods are scarcer. Such functions are frequently called black-box functions. To solve problems of this kind without constraints (unconstrained optimization), we can use direct search methods, which require neither derivatives nor approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solution of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
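The overall scheme described above, an outer loop that increases a penalty parameter around a derivative-free inner solver, can be sketched as follows; the quadratic penalty and the compass-search inner solver are illustrative choices, not the specific methods classified in the chapter:

```python
def compass_search(f, x, step=0.5, tol=1e-6, max_iter=20000):
    # Derivative-free direct search: probe +/- step along each axis,
    # move on strict improvement, halve the step when no probe improves.
    x = list(x)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x

def penalty_solve(f, eq_constraints, x0, mu=1.0, growth=10.0, rounds=8):
    # Quadratic penalty method: minimize f(x) + mu * sum g_i(x)^2 with a
    # derivative-free inner solver, increasing mu each round so the
    # unconstrained minimizers approach the constrained solution.
    x = list(x0)
    for _ in range(rounds):
        def penalized(z, m=mu):
            return f(z) + m * sum(g(z) ** 2 for g in eq_constraints)
        x = compass_search(penalized, x)
        mu *= growth
    return x
```

For example, minimizing x₁² + x₂² subject to x₁ + x₂ = 1 via `penalty_solve(lambda x: x[0]**2 + x[1]**2, [lambda x: x[0] + x[1] - 1.0], [0.0, 0.0])` converges toward the constrained minimizer (0.5, 0.5), with no derivative information used anywhere.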