924 results for Reasonable Length of Process


Relevance: 100.00%

Abstract:

Introduction: The retroarticular process is a bony prominence formed by the thickening of the lateral border of the mandibular fossa, forming the posterior wall of the temporomandibular joint. Since little is known and discussed about the retroarticular process, our aim was to study its presence, shape and size, relating these findings to the shape of the skulls according to the horizontal cephalic index. Materials and Methods: We used 400 dry human skulls from the Anatomy Laboratory of the Institute of Science and Technology - UNESP. Each skull was classified as brachycranic, mesocranic or dolichocranic, and then positioned on a craniostat to measure the height of the retroarticular process from its lower extremity to the auriculo-orbital plane. The width was obtained by measuring the base of the process along its longest lateral axis. Results: The retroarticular process was found bilaterally in 397 skulls (99.25%). The processes were classified into the following shapes: pyramidal (35.55%), tubercular (31.78%), mammillary (20.73%), crest-like (9.05%) and molar-shaped (2.89%); 254 skulls (63.50%) showed the same type of process on the right and left sides (Kappa = 0.496, moderate agreement). The average height and width were 5.28 mm and 12.81 mm, respectively. Conclusion: The retroarticular process was found in almost all the skulls examined. There is no significant evidence of a relationship among the presence, shape and size of the retroarticular process and the shape of the skulls according to the horizontal cephalic index. However, our findings suggest a functional relationship between the process and the temporomandibular joint.
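
For context, skull-shape classification of this kind is conventionally based on the cephalic (cranial) index. The abstract does not state the exact cut-off values used in the study, so the standard convention is shown here as an assumption:

    \mathrm{CI} \;=\; \frac{\text{maximum cranial breadth}}{\text{maximum cranial length}} \times 100,
    \qquad
    \mathrm{CI} < 75 \ \text{(dolichocranic)}, \quad
    75 \le \mathrm{CI} < 80 \ \text{(mesocranic)}, \quad
    \mathrm{CI} \ge 80 \ \text{(brachycranic)}.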

Relevance: 100.00%

Abstract:

Introduction: In Brazil, part of the ginger crop is of inadequate quality for export. The production of spirit from these left-over rhizomes is an alternative of great interest to producers. Aim: To increase the value of left-over rhizomes, this work studied the use of ginger as a raw material for alcoholic beverage production. The effect of fermentation conditions on the components of the fermented product was evaluated, as well as the quality of the distilled alcoholic beverage of ginger. Methods: Dehydrated ginger underwent enzymatic hydrolysis and saccharification, and the sugar profile of the resulting hydrolysate was analyzed by HPLC. The alcoholic fermentation followed a central composite rotational design for three factors: fermentation temperature (23 to 37 ºC), fermentation time (17 to 33 h) and inoculum concentration (0.22 to 3.00%). The fermented product was analyzed by HPLC for ethanol, methanol, glycerol and residual sugars. The distilled alcoholic beverage of ginger was analyzed for ethanol, methanol, acetaldehyde, ethyl acetate and higher alcohols by gas chromatography (GC); copper content and acidity were also determined. Results: The sugar profile of the ginger hydrolysate revealed 77.8% glucose. Analysis of the fermentation data showed an influence of temperature on the ethanol and methanol content of the fermented product, while fermentation time affected glycerol content; all process parameters influenced residual sugar content. Chromatographic analysis showed the presence of methanol, ethyl acetate, aldehydes, acids, higher alcohols and esters in the distilled beverage. Conclusion: A ferment with higher ethanol levels can be obtained with 1.5% w/w inoculum at 30 °C and 24 h of fermentation; under these conditions the resulting ginger spirit was of good quality.
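
As an illustration of how such a rotatable central composite design maps coded factor levels to actual settings: the abstract gives the -alpha/+alpha extremes, so the center points and step sizes below are back-calculated from those ranges and should be treated as assumptions, not as values from the study.

    # Sketch: decoding coded levels of a central composite rotational design (CCRD).
    # For k = 3 factors, the rotatable axial distance is alpha = (2**k) ** 0.25.
    alpha = (2 ** 3) ** 0.25  # ~1.682

    # (center, step per coded unit), back-calculated from the -alpha..+alpha
    # ranges quoted in the abstract: 23-37 C, 17-33 h, 0.22-3.00 % inoculum.
    factors = {
        "temperature_C": (30.0, 7.0 / alpha),
        "time_h": (25.0, 8.0 / alpha),
        "inoculum_pct": (1.61, 1.39 / alpha),
    }

    def decode(coded_point):
        """Convert coded levels (e.g. -1, 0, +1, +/-alpha) to actual settings."""
        return {name: center + coded * step
                for (name, (center, step)), coded in zip(factors.items(), coded_point)}

    print(decode((0, 0, 0)))           # center point: 30 C, 25 h, 1.61 %
    print(decode((alpha, 0, -alpha)))  # axial point: 37 C, 25 h, 0.22 %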

Relevance: 100.00%

Abstract:

Solar reactors can be attractive for photodegradation processes due to their lower electrical energy demand. The performance of a solar reactor under two flow configurations, i.e., plug flow and mixed flow, is compared based on experimental results from a pilot-scale solar reactor. Aqueous solutions of phenol were used as a model for industrial wastewater containing organic contaminants. Batch experiments were carried out under clear sky, resulting in removal rates in the range of 96-100%. The dissolved organic carbon removal rate was simulated by an empirical model based on neural networks, which was fitted to the experimental data, resulting in a correlation coefficient of 0.9856. This approach enabled the estimation of effects of process variables that could not be evaluated experimentally. Simulations with different reactor configurations indicated relevant aspects for the design of solar reactors.
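
The abstract does not specify the network architecture or its inputs; the sketch below only illustrates the general pattern of fitting a small neural-network regression to batch data and reporting a correlation coefficient, with synthetic data and hypothetical inputs standing in for the pilot-plant measurements.

    # Sketch: empirical neural-network model of dissolved organic carbon removal.
    # Synthetic data stands in for the pilot-scale measurements; the inputs and
    # architecture are illustrative assumptions, not those of the study.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Hypothetical inputs: accumulated UV dose and initial phenol concentration.
    X = rng.uniform([0.0, 10.0], [50.0, 100.0], size=(200, 2))
    y = 1.0 - np.exp(-0.1 * X[:, 0]) + rng.normal(0.0, 0.01, 200)  # removal fraction

    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    model.fit(X, y)

    r = np.corrcoef(y, model.predict(X))[0, 1]
    print(f"correlation coefficient: {r:.4f}")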

Relevance: 100.00%

Abstract:

Society's increasing aversion to technological risk requires the development of inherently safer and environmentally friendlier processes, while assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Though the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements in alternative design options, to allow trade-offs among contradictory aspects and to prevent “risk shift”. In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous to humans and the environment are used or stored. The tools were mainly devoted to application in the “conceptual” and “basic design” stages, when the project is still open to changes (due to the large number of degrees of freedom) which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools gives a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system was based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs are based on impact models (some of them complex) but are quick and easy to apply in practice, and their full evaluation is possible even from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based on both the calculation of the expected consequences of potential accidents and the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previously proposed methods for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing “out of control” conditions. In the assessment of layout plans, ad hoc tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated by application to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, of the plant and of the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of the isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for inherent safety assessment of materials.
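
The abstract does not give the actual KPI definitions or aggregation criteria; the sketch below only illustrates the general pattern of normalizing heterogeneous impact indicators against site-specific reference burdens and aggregating them with policy weights. All names and numbers are hypothetical placeholders.

    # Sketch: aggregating sustainability KPIs into a single profile score.
    # KPI names, reference burdens and weights are hypothetical placeholders.

    kpis = {                      # raw indicator values for a design option
        "economic_cost": 1.2e6,   # e.g. EUR/year
        "societal_risk": 3.0e-5,  # e.g. expected fatalities/year
        "env_emissions": 4.5e2,   # e.g. t CO2-eq/year
    }
    reference = {                 # site-specific reference impact burdens
        "economic_cost": 1.0e6,
        "societal_risk": 1.0e-4,
        "env_emissions": 5.0e2,
    }
    weights = {                   # sustainability-policy weights, summing to 1
        "economic_cost": 0.4,
        "societal_risk": 0.35,
        "env_emissions": 0.25,
    }

    # Normalize each KPI by its reference burden, then aggregate; lower is better.
    score = sum(weights[k] * kpis[k] / reference[k] for k in kpis)
    print(f"aggregated sustainability score: {score:.3f}")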

Relevance: 100.00%

Abstract:

In this study a novel method, MicroJet reactor technology, was developed to enable the custom preparation of nanoparticles. Danazol/HPMCP HP50 and gliclazide/Eudragit S100 nanoparticles were used as model systems to investigate the effects of process parameters and of the microjet reactor setup on nanoparticle properties during the construction of the microjet reactor. Following the feasibility study of the microjet reactor system, three different nanoparticle formulations were prepared using fenofibrate as the model drug: fenofibrate nanoparticles stabilized with poloxamer 407 (FN), fenofibrate nanoparticles in a hydroxypropyl methyl cellulose phthalate (HPMCP) matrix (FHN), and fenofibrate nanoparticles in an HPMCP and chitosan matrix (FHCN), all prepared by controlled precipitation using MicroJet reactor technology. Particle sizes of all the nanoparticle formulations were adjusted to 200-250 nm. Changes in the experimental parameters altered the system thermodynamics, allowing the production of nanoparticles between 20-1000 nm (PDI < 0.2) with high drug loading efficiencies (96.5% at a 20:1 polymer:drug ratio). Drug release from all nanoparticle formulations was fast and complete after 15 minutes in both FaSSIF and FeSSIF media, whereas in mucoadhesiveness tests only the FHCN formulation was found to be mucoadhesive. Results of the Caco-2 studies revealed that the % dose absorbed values were significantly higher (p < 0.01) for FHCN when either FaSSIF or FeSSIF was used as the transport buffer.
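
As a side note on the size metrics quoted above: to a first (cumulants-style) approximation, the polydispersity index of a size distribution is the squared relative width of the distribution. A minimal arithmetic illustration with invented sizes, not data from the study:

    # Sketch: mean particle size and polydispersity index (PDI) from a size list.
    # PDI ~ (sigma / mean)**2 is the usual first-order approximation; the sizes
    # below are invented for illustration only.
    import statistics

    sizes_nm = [205, 219, 230, 242, 251, 224, 237, 210, 246, 228]

    mean = statistics.mean(sizes_nm)
    sigma = statistics.pstdev(sizes_nm)
    pdi = (sigma / mean) ** 2

    print(f"mean size: {mean:.1f} nm, PDI: {pdi:.3f}")  # PDI < 0.2 => narrow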

Relevance: 100.00%

Abstract:

In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
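
To make the flux-versus-steady-state distinction concrete: a common flux assay described in guidelines of this kind compares an autophagosome marker such as LC3-II with and without a lysosomal degradation inhibitor; the difference, rather than the absolute level, reflects flux. A schematic illustration with invented densitometry values:

    # Sketch: distinguishing autophagosome accumulation from autophagic flux.
    # LC3-II levels (arbitrary densitometry units) are invented for illustration.
    def flux(lc3ii_with_inhibitor, lc3ii_without_inhibitor):
        """Net flux estimate: LC3-II that would have been degraded in lysosomes."""
        return lc3ii_with_inhibitor - lc3ii_without_inhibitor

    # Case A: a stimulus raises both basal and inhibitor-treated LC3-II levels.
    print("case A flux:", flux(8.0, 3.0))  # 5.0 -> genuine increase in flux
    # Case B: LC3-II is high but barely rises when degradation is blocked,
    # suggesting a block in trafficking to lysosomes, not more autophagy.
    print("case B flux:", flux(8.2, 8.0))  # 0.2 -> accumulation, not flux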

Relevance: 100.00%

Abstract:

ASTM A529 carbon–manganese steel angle specimens were joined by flash butt welding and the effects of varying process parameter settings on the resulting welds were investigated. The weld metal and heat affected zones were examined and tested using tensile testing, ultrasonic scanning, Rockwell hardness testing, optical microscopy, and scanning electron microscopy with energy dispersive spectroscopy in order to quantify the effect of process variables on weld quality. Statistical analysis of experimental tensile and ultrasonic scanning data highlighted the sensitivity of weld strength and the presence of weld zone inclusions and interfacial defects to the process factors of upset current, flashing time duration, and upset dimension. Subsequent microstructural analysis revealed various phases within the weld and heat affected zone, including acicular ferrite, Widmanstätten or side-plate ferrite, and grain boundary ferrite. Inspection of the fracture surfaces of multiple tensile specimens, with scanning electron microscopy, displayed evidence of brittle cleavage fracture within the weld zone for certain factor combinations. Test results also indicated that hardness was increased in the weld zone for all specimens, which can be attributed to the extensive deformation of the upset operation. The significance of weld process factor levels on microstructure, fracture characteristics, and weld zone strength was analyzed. The relationships between significant flash welding process variables and weld quality metrics as applied to ASTM A529-Grade 50 steel angle were formalized in empirical process models.
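
The abstract formalizes weld quality as empirical process models in the three process factors; a minimal sketch of fitting such a model by least squares is shown below. The factor names follow the abstract, but the data, ranges and linear model form are invented placeholders.

    # Sketch: empirical process model for weld strength vs. flash welding factors.
    # Synthetic data; a real model would come from the designed experiments.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 30
    upset_current = rng.uniform(10, 20, n)   # kA (hypothetical range)
    flash_time = rng.uniform(2, 6, n)        # s
    upset_dimension = rng.uniform(3, 8, n)   # mm
    strength = (300 + 8 * upset_current + 5 * flash_time
                - 4 * upset_dimension + rng.normal(0, 5, n))  # MPa

    # Linear model: strength = b0 + b1*I + b2*t + b3*d
    A = np.column_stack([np.ones(n), upset_current, flash_time, upset_dimension])
    coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
    print("fitted coefficients:", np.round(coef, 2))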

Relevance: 100.00%

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in the whole corpus. In order to be able to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Task 2, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially advantageous for testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The problem of the consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system into any other language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of primary lexico-syntactic information). The formalism and methods used in this project can be used in other Slavic languages without substantial changes.
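
A toy illustration of grammar-based checking in the spirit described above (this uses a trivial English CFG via NLTK purely to show the parse-or-flag pattern; it is not the project's formalism, which targets Czech with metarules and goes on to localize the error rather than merely report a failure):

    # Sketch: grammar-based grammaticality checking with a toy CFG.
    import nltk

    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the' | 'a'
    N -> 'dog' | 'cat'
    V -> 'sees' | 'chases'
    """)
    parser = nltk.ChartParser(grammar)

    def check(tokens):
        try:
            trees = list(parser.parse(tokens))
        except ValueError:          # a token not covered by the grammar
            return False
        return len(trees) > 0

    print(check("the dog sees a cat".split()))   # True  -> parse found
    print(check("the dog a sees cat".split()))   # False -> flagged as ill-formed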

Relevance: 100.00%

Abstract:

With a steady increase in regulatory requirements for business processes, automated support for compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we reflect on and discuss compliance-checking approaches in order to provide insight into their generalizability and evaluation. The results imply that current approaches mainly focus on particular modeling techniques and/or a restricted set of compliance rule types. Most approaches abstain from real-world evaluation, which raises the question of their practical applicability. Based on the search results, we propose a roadmap for further research in model-based business process compliance checking.
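
As a deliberately simplified example of one compliance rule type such approaches handle: a "response" pattern, requiring that every occurrence of one activity is eventually followed by another, can be checked over a finite execution trace in a few lines. The rule and traces below are invented:

    # Sketch: checking a "response" compliance rule over a process trace.
    # Hypothetical rule: every 'claim_received' must eventually be
    # followed by 'claim_assessed'.
    def satisfies_response(trace, trigger, response):
        pending = False
        for activity in trace:
            if activity == trigger:
                pending = True      # a trigger is awaiting its response
            elif activity == response:
                pending = False     # any later response discharges it
        return not pending

    trace_ok = ["claim_received", "check_fraud", "claim_assessed", "notify"]
    trace_bad = ["claim_received", "check_fraud", "notify"]
    print(satisfies_response(trace_ok, "claim_received", "claim_assessed"))   # True
    print(satisfies_response(trace_bad, "claim_received", "claim_assessed"))  # False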

Relevance: 100.00%

Abstract:

Diminishing crude oil and natural gas supplies, along with concern about greenhouse gases, are major driving forces in the search for efficient renewable energy sources. The conversion of lignocellulosic biomass to energy and useful chemicals is a component of the solution. Ethanol is most commonly produced by enzymatic hydrolysis of complex carbohydrates to simple sugars followed by fermentation using yeast: C6H10O5 + H2O —(enzymes)→ C6H12O6 —(yeast)→ 2 CH3CH2OH + 2 CO2. In the U.S., corn is the primary raw material for commercial ethanol production. However, there is insufficient corn available to meet the future demand for ethanol as a gasoline additive. Consequently a variety of processes are being developed for producing ethanol from biomass, among which is the NREL process for the production of ethanol from white hardwood. The objective of the thesis reported here was to perform a techno-economic analysis of the hardwood-to-ethanol process. In this analysis a greenfield plant was compared with an ethanol plant co-located with a Kraft pulp mill, the advantage of the latter case being that facilities can be shared between ethanol and pulp production. Preliminary process designs were performed for three cases: a base-case size of 2205 dry tons/day of hardwood (52 million gallons of ethanol per year) as well as half and double this size. The thermal efficiency of the NREL process was estimated to be approximately 36%; that is, about 36% of the thermal energy in the wood is retained in the product ethanol and by-product electrical energy. The discounted cash flow rate of return on investment and the net present value methods were used to evaluate the economic feasibility of the process alternatives, with a minimum acceptable after-tax discounted cash flow rate of return of 10%. In all of the process alternatives investigated, the dominant cost factors are the capital recovery charges and the cost of wood. The greenfield NREL process is not economically viable, with the cost of producing ethanol varying from $2.58 to $2.08/gallon for the half-capacity and double-capacity cases respectively. The co-location cases appear more promising due to reductions in capital costs: in the most profitable co-location case, the discounted cash flow rate of return improved from 8.5% for the half-capacity case to 20.3% for the double-capacity case. Due to economy of scale, the investments become more profitable as the size of the plant increases; this is limited by the amount of wood that can be delivered to the plant on a sustainable basis as well as the demand for ethanol within a reasonable distance of the plant.
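
The two evaluation methods named above are standard; a minimal sketch of both is given below. The cash-flow numbers are invented for illustration, not taken from the thesis.

    # Sketch: net present value and discounted cash flow rate of return (IRR).
    # Cash flows are hypothetical: a capital outlay followed by annual net revenue.
    def npv(rate, cash_flows):
        """NPV of cash_flows[t] received at end of year t (t = 0, 1, 2, ...)."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def dcf_rate_of_return(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
        """Rate making NPV zero, found by bisection (assumes a sign change)."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cash_flows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    flows = [-150e6] + [25e6] * 15   # $150M investment, $25M/year for 15 years
    print(f"NPV at 10%: ${npv(0.10, flows) / 1e6:.1f}M")
    print(f"DCF rate of return: {dcf_rate_of_return(flows):.1%}")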

Relevance: 100.00%

Abstract:

Ocean Drilling Program (ODP) Site 959 was drilled on the northern border of the Côte d'Ivoire-Ghana Ridge at a water depth of 2100 m. The total Pleistocene thickness does not exceed 20 m; winnowing processes resulted in a low accumulation rate and notable stratigraphic hiatuses. During the Late Pleistocene, bottom circulation was very active and controlled laminae deposition (contourites), which increased the concentration of glauconitic infillings of foraminifera and, more rarely, of volcanic glass and blue-green grains, with one or several subordinate ferromagnesian silicates. The volcanic glass was generally X-ray amorphous and schematically classified as basic to intermediate (44-60% SiO2). Opal-A or opal-CT suggested the beginning of the palagonitisation process, and previous smectitic deposits may have been eroded mechanically. The blue-green grains presented two main types of mineralogic composition: (1) neoformed K,Fe-smectite associated with zeolite (like phillipsite) and unequal amounts of quartz and anorthite; (2) feldspathic grains dominated by albite but including quartz, volcanic glass and smectites as accessory components. They were more or less associated with the volcanic glass. On the basis of their chemical composition, a genetic relationship between the blue-green grains and the volcanic glass seems obvious, although some heterogeneous grains appear to be primary ignimbrite rather than the result of glass weathering. The most reasonable origin of these pyroclastic ejecta would be explosive events from the Cameroon Volcanic Ridge, especially from the São Tomé and Príncipe islands and the Mount Cameroon area. This is supported both by grain geochemistry and by the timing of volcanic activity, i.e. Pleistocene. After westward wind transport (some 1200 km) and ash fall-out, subsequent winnowing by bottom currents concentrated the volcanic grains previously disseminated within the hemipelagic sediment. Palagonitisation, and especially phillipsite formation, may result from a relatively rapid reaction during burial diagenesis (<1 m.y.) in deep-sea deposits with relatively low sedimentation rates. However, it cannot be excluded that the weathering had already begun on the Cameroon Ridge before the explosive event.

Relevance: 100.00%

Abstract:

Metamodels have proven to be very useful when it comes to reducing the computational requirements of Evolutionary Algorithm-based optimization, by acting as quick-solving surrogates for slow-solving fitness functions. The relationship between metamodel scope and objective function varies between applications; that is, in some cases the metamodel acts as a surrogate for the whole fitness function, whereas in other cases it replaces only a component of it. This paper presents a formalized qualitative process to evaluate a fitness function and determine the most suitable metamodel scope, so as to increase the likelihood of calibrating a high-fidelity metamodel and hence obtain good optimization results in a reasonable amount of time. The process is applied to the risk-based optimization of water distribution systems, a very computationally intensive problem for real-world systems. The process is validated on a simple case study (modified New York Tunnels) and the power of metamodelling is demonstrated on a real-world case study (Pacific City), with a computational speed-up of several orders of magnitude.
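
For readers unfamiliar with the pattern, the sketch below shows the whole-fitness-function surrogate scope in miniature: a cheap polynomial stands in for the metamodel, and a toy function stands in for a slow hydraulic/risk simulation. All names and settings are illustrative.

    # Sketch: surrogate-assisted optimization with a whole-fitness metamodel.
    import numpy as np

    def expensive_fitness(x):           # pretend this takes minutes per call
        return (x - 2.0) ** 2 + 1.0

    rng = np.random.default_rng(0)

    # 1) Evaluate a small design-of-experiments sample with the true function.
    x_train = rng.uniform(-5, 5, 12)
    y_train = expensive_fitness(x_train)

    # 2) Calibrate the metamodel (here, a simple quadratic polynomial).
    surrogate = np.poly1d(np.polyfit(x_train, y_train, 2))

    # 3) Let the optimizer query the cheap surrogate for many candidates...
    candidates = rng.uniform(-5, 5, 10_000)
    best = candidates[np.argmin(surrogate(candidates))]

    # 4) ...and confirm the shortlisted solution with the expensive function.
    print(f"best candidate: {best:.3f}, true fitness: {expensive_fitness(best):.3f}")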

Relevance: 100.00%

Abstract:

An important aspect of Process Simulators for photovoltaics is prediction of defect evolution during device fabrication. Over the last twenty years, these tools have accelerated process optimization, and several Process Simulators for iron, a ubiquitous and deleterious impurity in silicon, have been developed. The diversity of these tools can make it difficult to build intuition about the physics governing iron behavior during processing. Thus, in one unified software environment and using self-consistent terminology, we combine and describe three of these Simulators. We vary structural defect distribution and iron precipitation equations to create eight distinct Models, which we then use to simulate different stages of processing. We find that the structural defect distribution influences the final interstitial iron concentration ([Fe-i]) more strongly than the iron precipitation equations. We identify two regimes of iron behavior: (1) diffusivity-limited, in which iron evolution is kinetically limited and bulk [Fe-i] predictions can vary by an order of magnitude or more, and (2) solubility-limited, in which iron evolution is near thermodynamic equilibrium and the Models yield similar results. This rigorous analysis provides new intuition that can inform Process Simulation, material, and process development, and it enables scientists and engineers to choose an appropriate level of Model complexity based on wafer type and quality, processing conditions, and available computation time.
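
To illustrate the diffusivity-limited regime: interstitial iron transport during a thermal step is often estimated from an Arrhenius diffusivity and the resulting diffusion length. The prefactor and activation energy below are commonly cited literature values for interstitial Fe in silicon, assumed here for illustration and not taken from the specific Simulators compared in the paper.

    # Sketch: diffusion length of interstitial iron in silicon for a thermal step.
    import math

    KB = 8.617e-5          # Boltzmann constant, eV/K
    D0 = 1.0e-3            # cm^2/s, assumed literature prefactor for Fe_i in Si
    EA = 0.67              # eV, assumed migration activation energy

    def diffusion_length_um(temp_c, time_s):
        d = D0 * math.exp(-EA / (KB * (temp_c + 273.15)))   # cm^2/s
        return math.sqrt(d * time_s) * 1e4                  # cm -> um

    # If the wafer is much thinner than the diffusion length, iron is near
    # equilibrium (solubility-limited); much thicker, and kinetics dominate.
    for temp_c, time_s in [(600, 60), (850, 60)]:
        print(f"{temp_c} C, {time_s} s: L = {diffusion_length_um(temp_c, time_s):.0f} um")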

Relevance: 100.00%

Abstract:

Unripe banana flour (UBF) is produced from bananas that have not undergone ripening and is an interesting alternative for reducing fruit losses related to inappropriate handling or fast ripening. UBF is considered a functional ingredient that improves glycemic and plasma insulin levels and has also shown efficacy in the control of satiety and insulin resistance. The aim of this work was to study the drying of unripe banana slabs (Musa cavendishii, Nanicão), developing a transient drying model with simultaneous moisture and heat transfer. After characterization of the raw material, drying was conducted at 40 ºC, 50 ºC and 60 ºC; the product temperature was recorded using thermocouples and the air velocity inside the chamber was 4 m·s-1. The experimental data made it possible to validate the diffusion model based on Fick's second law and Fourier's law. For this purpose, sorption isotherms were measured and fitted to the GAB model, estimating the equilibrium moisture content (Xe) as 1.76 g H2O/100 g d.b. at 60 ºC and 10% relative humidity (RH); the thermophysical properties (k, Cp, ρ) were also measured for use in the model. Five cases were contemplated: i) constant thermophysical properties; ii) variable properties; iii) estimation of the heat transfer coefficient (h), mass transfer coefficient (hm) and effective diffusivity (De), giving 134 W·m-2·K-1, 4.91x10-5 m·s-1 and 3.278x10-10 m2·s-1 at 60 ºC, respectively; iv) variable De, which presented third-order polynomial behavior as a function of moisture content; v) shrinkage, which affected the mathematical model especially in the first 3 hours of the process: the thickness contracted by about (30.34 ± 1.29)% of its initial value, and two decreasing drying rate periods (DDR I and DDR II) were found, with De of 3.28x10-10 m2·s-1 and 1.77x10-10 m2·s-1, respectively. COMSOL Multiphysics simulations were performed using the heat and mass transfer coefficients estimated by the mathematical model.
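
For reference, the slab-geometry solution of Fick's second law commonly used in such drying models, and the GAB isotherm used here for Xe, have the standard forms below; these are shown as a reminder of the model family, while the study's exact boundary conditions (e.g. the shrinkage treatment) refine them:

    \frac{X - X_e}{X_0 - X_e} \;=\; \frac{8}{\pi^2} \sum_{n=0}^{\infty}
    \frac{1}{(2n+1)^2} \exp\!\left(-\frac{(2n+1)^2 \pi^2 D_e\, t}{4 L^2}\right),
    \qquad L = \text{slab half-thickness};

    X_e \;=\; \frac{X_m\, C\, K\, a_w}{(1 - K a_w)\,(1 - K a_w + C K a_w)}
    \quad \text{(GAB)}.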

Relevance: 100.00%

Abstract:

In the present work we study the hydroxide activation (NaOH and KOH) of phenol-formaldehyde resin derived carbon nanofibres (CNFs), prepared by a polymer blend technique, to obtain highly porous activated carbon nanofibres (ACNFs). The morphology and textural characteristics of these ACNFs were studied, and their hydrogen storage capacities at 77 K (at 0.1 MPa and at high pressures up to 4 MPa) were assessed and compared with reported capacities of other porous carbon materials. The phenol-formaldehyde resin derived carbon fibres were successfully activated with these two alkaline hydroxides, rendering highly microporous ACNFs with reasonably good activation process yields, up to 47 wt.%, compared to 7 wt.% yields from steam activation for similar surface areas of 1500 m2/g or higher. These nano-sized activated carbons present interesting H2 storage capacities at 77 K, comparable to, or even higher than, those of other high-quality microporous carbon materials. This is due, in part, to their nano-sized diameters, which allow their packing densities to be enhanced to 0.71 g/cm3 and hence improve their resulting hydrogen storage capacities.