948 results for Method of lines
Abstract:
Most studies on the measurement of plant transpiration, especially in woody fruit crops, rely on methods of heat supply in the trunk. This study aimed to calibrate the Thermal Dissipation Probe (TDP) method to estimate transpiration, to study the effects of natural thermal gradients, and to determine the relation between outside diameter and xylem area in young 'Valencia' orange plants. TDPs were installed in 40 fifteen-month-old orange plants, planted in 500 L boxes in a greenhouse. Correction of the natural thermal differences (DTN), estimated from two unheated probes, was tested. The area of the conductive section was related to the outside diameter of the stem by polynomial regression. The equation for estimating sap flow was calibrated against lysimeter measurements of a representative plant: the angular coefficient of the equation was adjusted by minimizing the absolute deviation between the estimated sap flow and the daily transpiration measured by the lysimeter. Based on these results, it was concluded that the TDP method, with the adjusted calibration and the DTN correction, was effective for assessing transpiration.
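The slope adjustment described above amounts to a one-parameter search. A minimal sketch, assuming hypothetical daily flow-index and lysimeter values and an illustrative grid of candidate coefficients (names and data are not from the study):

```python
import numpy as np

def calibrate_slope(flow_index, lysimeter_transp, candidates):
    """Return the slope (angular coefficient) that minimizes the total
    absolute deviation between estimated sap flow (slope * flow_index)
    and the daily transpiration measured by the lysimeter."""
    best, best_err = None, float("inf")
    for a in candidates:
        err = np.abs(a * flow_index - lysimeter_transp).sum()
        if err < best_err:
            best, best_err = a, err
    return best

# Hypothetical example: three days of data whose true coefficient is 2.5.
index = np.array([1.0, 2.0, 3.0])
measured = 2.5 * index
slope = calibrate_slope(index, measured, np.linspace(0.0, 5.0, 501))
```

A grid search is used here only for clarity; any scalar minimizer over the absolute deviation would serve the same purpose.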
Abstract:
TRIZ is a well-known tool based on analytical methods for creative problem solving. This thesis suggests an adapted version of the contradiction matrix, a powerful tool of TRIZ, along with a few principles based on the concepts of the original TRIZ. It is believed that the proposed version would aid in problem solving, especially for problems encountered in chemical process industries with unit operations. In addition, this thesis should help new process engineers recognize the importance of the various available methods for creative problem solving and learn the TRIZ method. This work mainly provides an idea of how to modify a TRIZ-based method according to one's requirements, so as to fit a particular niche area and solve problems efficiently and creatively. In this case, the contradiction matrix developed is based on a review of common problems encountered in the chemical process industry, particularly in unit operations, and the resolutions are based on approaches used in the past to handle those issues.
Abstract:
Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning. This accuracy requirement poses a tool center point calibration problem if measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market: manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for all robots with two free digital input ports. It follows the traditional approach of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined from the center axis; the remaining rotation about the Z-axis is calculated for tools whose widths differ along the X- and Y-axes. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was achieved. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should use a more accurate manipulator, since the method employs the robot itself as a measuring device.
Abstract:
The objectives of the present study were 1) to compare results obtained by the traditional manual method of measuring heart rate (HR) and heart rate response (HRR) to the Valsalva maneuver, standing and deep breathing, with those obtained using a computerized data analysis system attached to a standard electrocardiograph machine; 2) to standardize the responses of healthy subjects to cardiovascular tests, and 3) to evaluate the response to these tests in a group of patients with diabetes mellitus (DM). In all subjects (97 healthy and 143 with DM) we evaluated HRR to deep breathing, HRR to standing, HRR to the Valsalva maneuver, and blood pressure response (BPR) to standing up and to a sustained handgrip. Since there was a strong positive correlation between the results obtained with the computerized method and the traditional method, we conclude that the new method can replace the traditional manual method for evaluating cardiovascular responses with the advantages of speed and objectivity. HRR and BPR of men and women did not differ. A correlation between age and HRR was observed for standing (r = -0.48, P<0.001) and deep breathing (r = -0.41, P<0.002). Abnormal BPR to standing was usually observed only in diabetic patients with definite and severe degrees of autonomic neuropathy.
Abstract:
In recent decades, the chemical synthesis of short oligonucleotides has become an important subject of study owing to the discovery of new functions for nucleic acids, such as antisense oligonucleotides (ASOs), aptamers, DNAzymes, microRNA (miRNA) and small interfering RNA (siRNA). Their applications in modern therapies and fundamental medicine, in the treatment of different cancers, viral infections and genetic disorders, have established the need for scalable methods for cheaper and easier industrial manufacture. While small-scale solid-phase oligonucleotide synthesis is the method of choice in the field, various challenges remain in producing short DNA and RNA oligomers in very large quantities. Solution-phase synthesis of oligonucleotides, on the other hand, offers more predictable scale-up and is amenable to standard industrial manufacturing techniques. In the present thesis, various protocols for the synthesis of short DNA and RNA oligomers were studied on peracetylated and methylated β-cyclodextrin supports, and also on a pentaerythritol-derived support. With the peracetylated and methylated β-cyclodextrin soluble supports, the coupling cycle was simplified by replacing the typical 5′-O-(4,4′-dimethoxytrityl) protecting group with an acid-labile acetal-protected 5′-O-(1-methoxy-1-methylethyl) group, which upon acid-catalyzed methanolysis released easily removable volatile products. For this purpose, monomeric 5′-O-(1-methoxy-1-methylethyl) 3′-(2-cyanoethyl-N,N-diisopropylphosphoramidite) building blocks were synthesized. Alternatively, for the precipitative pentaerythritol support, novel 2′-O-(2-cyanoethyl)-5′-O-(1-methoxy-1-methylethyl) protected phosphoramidite building blocks for RNA synthesis were prepared, and their applicability was demonstrated by the synthesis of a pentamer.
Similarly, a method for the preparation of short RNAs from commercially available 5′-O-(4,4′-dimethoxytrityl)-2′-O-(tert-butyldimethylsilyl)ribonucleoside 3′-(2-cyanoethyl-N,N-diisopropylphosphoramidite) building blocks has been developed.
Abstract:
Thermal cutting methods are commonly used in the manufacture of metal parts. Thermal cutting processes separate materials using heat, with or without a stream of cutting oxygen. Common processes are oxygen, plasma and laser cutting; which method is used depends on the application and material. Numerically controlled thermal cutting is a cost-effective way of prefabricating components. One design aim is to minimize the number of work steps in order to increase competitiveness. As a result, the holes and openings in plate parts manufactured today are made using thermal cutting methods. From a fatigue life perspective this is a problem, because the cut introduces a local detail, in the as-welded state, that raises the stress in a local area of the plate. Where the static utilization of the net section is fully used, the calculated linear local stresses and stress ranges are often more than twice the material yield strength, exceeding the shakedown criteria. Fatigue life assessment of flame-cut details is commonly based on the nominal stress method. For welded details, design standards and instructions provide more accurate and flexible methods, e.g. the hot-spot method, but these methods are not universally applied to flame-cut edges. Laboratory fatigue tests of flame-cut edges indicated that fatigue life estimates based on the standard nominal stress method can be quite conservative when a high notch factor is present. This is undesirable and limits the potential for minimizing structure size and total cost. A new calculation method is introduced to improve the accuracy of theoretical fatigue life prediction for a flame-cut edge with a high stress concentration factor. Simple equations were derived from laboratory fatigue test results, which are published in this work. The proposed method, called the modified FAT method (FATmod), takes into account the residual stress state, surface quality, material strength class and true stress ratio at the critical location.
Abstract:
Gene therapy is predicated upon efficient gene transfer. While viral vectors are the method of choice for transformation efficiency, their immunogenicity and safety concerns remain problematic. Non-viral vectors, on the other hand, have shown high degrees of safety and are mostly non-immunogenic, but they usually suffer from low levels of transformation efficiency and transgene expression. Thus, increasing the transformation efficiency of non-viral vectors, in particular of the calcium phosphate co-precipitation technique, is a way of generating a suitable vector for gene therapy and is the aim of this study. It has long been known that different cell lines have different transfection efficiencies regardless of transfection methodology (Lin et al., 1994). Using the commonly available cell lines Madin-Darby Bovine Kidney (MDBK), HeLa and Human Embryonic Kidney (HEK-293), we have shown a decreasing trend of DNase activity based on a plasmid digestion assay. From densitometry studies, as much as a 40% reduction in DNase activity was observed when comparing HEK-293 (least active) to MDBK (most active). Using various biochemical assays, it was determined that DNase γ, in particular, was expressed more highly in MDBK cells than in both HeLa and HEK-293. Upon cloning the bovine DNase γ gene, we used the sequence information to construct antisense-expressing plasmids via both traditional antisense RNA (pASDGneoM) and siRNA (psiRNA-S4, psiRNA-S11 and psiRNA-S16). For the construction of pASDGneoM, the 3′ end of DNase γ was inserted in the opposite orientation under a cytomegalovirus (CMV) promoter, such that RNA complementary to the DNase γ mRNA was expressed. For the siRNA plasmids, the sequence was screened to yield optimal short sequences for siRNA inhibition.
Silencing of bovine DNase γ led to an increase in transfection efficiency with the traditional calcium phosphate co-precipitation technique; stable clones of siRNA-producing MDBK cell lines (psiRNA-S4 B1 and psiRNA-S4 B4) both demonstrated 4-fold increases in transfection efficiency. Furthermore, serial transfection of the antisense DNase γ plasmid pASDGneoM and the reporter pCMV-β showed a maximum 8-fold increase in transfection efficiency when the two transfections were carried out 4 hours apart (i.e. transfection of pASDGneoM, then, four hours later, transfection of pCMV-β). Together, these results demonstrate the involvement of DNase γ in reducing transfection efficiency, at least by the traditional calcium phosphate technique.
Abstract:
This qualitative research project uses a Deleuzo-Guattarian theoretical framework to address the question: “How are the politically oriented social forums in Gaia Online experienced as a continuum of overlapping of lines, including molar lines, lines of flight, and molecular lines?” Although smooth lines of flight may occur in Gaia, there are always mechanisms that work to re-territorialize them as more striated molar operations. Conversely, while more striated molar lines may be evident in Gaia, there are also smooth lines of flight that attempt to deterritorialize them as smooth space. Founded in 2003, Gaia is a virtual community in which members use 3D avatars to socialize with others, create content, and play games. Deleuze and Guattari (1987) have defined space with three systems: on one end is state-oriented static space, on the other end is nomadic fluid space, and situated in the middle is molecular space which contains both smooth and striating elements. While state-oriented striated space is based on routines, rules, and specifications, nomadic smooth space is flexible, always changing, and full of possibility. Some of the smoother operations that are evident in Gaia include becoming other, decentred communications, desire as resistance, and lines of flight. Some of the more striated operations include social reproduction of gender norms/expectations, capitalist mechanisms, violence and intolerance linked to categories and binaries (racism/sexism/ageism), the regulation of desire, and the organisation of bodies.
Abstract:
We offer an axiomatization of the serial cost-sharing method of Friedman and Moulin (1999). The key property in our axiom system is Group Demand Monotonicity, asking that when a group of agents raise their demands, not all of them should pay less.
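For readers unfamiliar with the rule being axiomatized, the classic one-good serial cost-sharing formula (due to Moulin and Shenker, whose extensions Friedman and Moulin study) can be sketched as follows. This is an illustrative implementation, not code from the paper:

```python
def serial_shares(demands, cost):
    """Serial cost shares under cost function `cost`.
    With demands sorted q1 <= ... <= qn and s_k = q_1+...+q_k + (n-k)*q_k,
    the k-th smallest demander pays
        sum over j <= k of (cost(s_j) - cost(s_{j-1})) / (n - j + 1),
    so small demanders are insulated from large ones."""
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    q = [demands[i] for i in order]
    n = len(q)
    s = [0.0]
    for k in range(1, n + 1):
        s.append(sum(q[:k]) + (n - k) * q[k - 1])
    shares_sorted, pay = [], 0.0
    for k in range(1, n + 1):
        pay += (cost(s[k]) - cost(s[k - 1])) / (n - k + 1)
        shares_sorted.append(pay)
    shares = [0.0] * n
    for pos, i in enumerate(order):
        shares[i] = shares_sorted[pos]
    return shares
```

The rule is budget-balanced (shares sum to the cost of total demand), and with a linear cost each agent simply pays for their own demand.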
Abstract:
The Mann–Kendall non-parametric test was employed for observational trend detection in the monthly, seasonal and annual precipitation of five meteorological subdivisions of Central Northeast India (CNE India) for different 30-year normal periods (NP), viz. 1889–1918 (NP1), 1919–1948 (NP2), 1949–1978 (NP3) and 1979–2008 (NP4). The trends of maximum and minimum temperatures were also investigated. The slopes of the trend lines were determined using least-squares linear fitting. A Morlet wavelet analysis was applied to monthly rainfall during June–September, total monsoon-season rainfall and annual rainfall to identify periodicities, and the significance of the periodicities was tested using the power spectrum method. The inferences drawn from the analyses will help policy managers, planners and agricultural scientists to work out irrigation and water management options under various possible climatic eventualities for the region. The long-term (1889–2008) mean annual rainfall of CNE India is 1,195.1 mm, with a standard deviation of 134.1 mm and a coefficient of variation of 11%. There is a significant decreasing trend of 4.6 mm/year for Jharkhand and 3.2 mm/year for CNE India. Since rice is the important kharif crop (May–October) in this region, the decreasing trend of rainfall during July may delay or affect the transplanting/vegetative phase of the crop, and assured irrigation is very much needed to tackle drought situations. During December, all the meteorological subdivisions except Jharkhand show a significant decreasing trend of rainfall during the recent normal period NP4. The decrease of rainfall during December may hamper the sowing of wheat, the important rabi crop (November–March) in most parts of this region.
Maximum temperature shows a significant rising trend of 0.008°C/year (at the 0.01 level) during the monsoon season and 0.014°C/year (at the 0.01 level) during the post-monsoon season over the period 1914–2003. The annual maximum temperature also shows a significant increasing trend of 0.008°C/year (at the 0.01 level) during the same period. Minimum temperature shows a significant rising trend of 0.012°C/year (at the 0.01 level) during the post-monsoon season and a significant falling trend of 0.002°C/year (at the 0.05 level) during the monsoon season. A significant 4–8-year peak periodicity band was noticed during September over Western UP, and a 30–34-year periodicity was observed during July over the Bihar subdivision. However, for CNE India as a whole, no significant periodicity was noticed in any of the time series.
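The two trend tools named in this abstract are straightforward to sketch. A minimal illustration of the Mann–Kendall S statistic and a least-squares slope (illustrative only; the study's significance testing and wavelet analysis are more involved):

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum of sign(x_j - x_i) over all
    pairs i < j. Positive S suggests an increasing trend, negative S a
    decreasing one; significance is judged from the variance of S."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return int(sum(np.sign(x[j] - x[i])
                   for i in range(n - 1) for j in range(i + 1, n)))

def ls_slope(t, x):
    """Least-squares slope of series x against time t (e.g. mm/year)."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    return float(np.polyfit(t, x, 1)[0])
```

For example, a strictly increasing 4-point series yields S = 6 (all pairs concordant), and fitting a perfectly linear series recovers its slope exactly.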
Abstract:
Immortal cell lines have not yet been reported from Penaeus monodon, which limits the prospects of investigating the associated viral pathogens, especially white spot syndrome virus (WSSV). In this context, a method of developing primary hemocyte culture from this crustacean has been standardized, employing modified double-strength Leibovitz-15 (L-15) growth medium supplemented with 2% glucose, MEM vitamins (1×), tryptose phosphate broth (2.95 g l⁻¹), 20% FBS, N-phenylthiourea (0.2 mM), 0.06 µg ml⁻¹ chloramphenicol, 100 µg ml⁻¹ streptomycin and 100 IU ml⁻¹ penicillin, with hemolymph drawn from shrimp grown in a bio-secured recirculating aquaculture system (RAS). In this medium the hemocytes remained viable for up to 8 days. A 5-bromo-2′-deoxyuridine (BrdU) labeling assay revealed its incorporation in 22 ± 7% of cells at 24 h. Susceptibility of the cells to WSSV was confirmed by immunofluorescence assay using a monoclonal antibody against the 28 kDa envelope protein of WSSV. A convenient method for determining virus titer as MTT50/ml was standardized employing the primary hemocyte culture. Expression of viral genes and cellular immune genes was also investigated. The cell culture was also used to determine the toxicity of a management chemical (benzalkonium chloride) via its IC50. The primary hemocyte culture could serve as a model for WSSV titration, for viral and cellular immune-related gene expression, and for investigations of the cytotoxicity of aquaculture drugs and chemicals.
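As an illustration of the kind of IC50 estimate mentioned above, here is a minimal sketch that interpolates 50% viability on a log-concentration scale. The dose-response values are hypothetical, and this is not the assay or analysis from the study:

```python
import math

def ic50(concentrations, viability_pct):
    """Estimate IC50 by linear interpolation of % viability against
    log10(concentration): return the concentration at which the
    response curve crosses 50% viability, or None if it never does."""
    pairs = sorted(zip(concentrations, viability_pct))
    for (c1, v1), (c2, v2) in zip(pairs, pairs[1:]):
        if (v1 - 50.0) * (v2 - 50.0) <= 0 and v1 != v2:
            l1, l2 = math.log10(c1), math.log10(c2)
            frac = (v1 - 50.0) / (v1 - v2)  # position of the crossing
            return 10 ** (l1 + frac * (l2 - l1))
    return None
```

In practice a sigmoidal (four-parameter logistic) fit is more common; log-linear interpolation is shown here only because it is short and transparent.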
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature and correlation distance. This framework provides deeper insight into joint error behaviour in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection and is specifically designed to accommodate variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the detection history over the subintervals in which it sings. The method can be viewed as generating data equivalent to closed capture–recapture information. Unlike the distance and multiple-observer methods, it does not require that all birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test using simulated known populations of singing birds, with a laptop computer sending signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also lets us know the size and spatial distribution of the populations being sampled. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated as a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated.
Underestimation was caused both by the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
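The Markovian singing model described above can be sketched as a two-state chain in which a bird can only be detected while singing. A minimal sketch with hypothetical parameter names, not the authors' simulation code:

```python
import random

def detection_history(n_intervals, p_start, p_stop, p_detect, rng):
    """Simulate one bird's detection history over point-count subintervals.
    The bird alternates between singing bouts and silence via a two-state
    Markov chain (a Markovian rather than fully random singing process);
    in each subinterval a singing bird is detected with probability
    p_detect, while a silent bird cannot be detected."""
    singing = rng.random() < 0.5  # arbitrary initial state
    history = []
    for _ in range(n_intervals):
        detected = singing and rng.random() < p_detect
        history.append(1 if detected else 0)
        # Markov transition to the next subinterval's state
        if singing:
            singing = rng.random() >= p_stop
        else:
            singing = rng.random() < p_start
    return history
```

Heterogeneous singing rates, which the paper shows cause underestimation, correspond to drawing different (p_start, p_stop) pairs for different individuals.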
Abstract:
LDL oxidation may be important in atherosclerosis. Extensive oxidation of LDL by copper induces increased uptake by macrophages, but results in decomposition of hydroperoxides, making it more difficult to investigate the effects of hydroperoxides in oxidised LDL on cell function. We describe here a simple method of oxidising LDL by dialysis against copper ions at 4 °C, which inhibits the decomposition of hydroperoxides and allows the production of LDL rich in hydroperoxides (626 ± 98 nmol/mg LDL protein) but low in oxysterols (3 ± 1 nmol 7-ketocholesterol/mg LDL protein), whilst allowing sufficient modification (2.6 ± 0.5 relative electrophoretic mobility) for rapid uptake by macrophages (5.49 ± 0.75 µg ¹²⁵I-labelled hydroperoxide-rich LDL vs. 0.46 ± 0.04 µg protein/mg cell protein in 18 h for native LDL). By dialysing under the same conditions but at 37 °C, the hydroperoxides are decomposed extensively and the LDL becomes rich in oxysterols. This novel method of oxidising LDL in high yield to either a hydroperoxide- or oxysterol-rich form, by simply altering the temperature of dialysis, may provide a useful tool for determining the effects of these different oxidation products on cell function.
Abstract:
Cross-contamination between cell lines is a longstanding and frequent cause of scientific misrepresentation. Estimates from national testing services indicate that up to 36% of cell lines are of a different origin or species to that claimed. To test a standard method of cell line authentication, 253 human cell lines from banks and research institutes worldwide were analyzed by short tandem repeat profiling. The short tandem repeat profile is a simple numerical code that is reproducible between laboratories, is inexpensive, and can provide an international reference standard for every cell line. If DNA profiling of cell lines is accepted and demanded internationally, scientific misrepresentation because of cross-contamination can be largely eliminated.
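Since an STR profile is a simple numerical code, comparing a query line against a reference can be reduced to a fraction of shared alleles. A minimal sketch of such a comparison (locus names and allele values are hypothetical; authentication services commonly treat roughly 80% or more shared alleles as a match, a convention not stated in this paper):

```python
def str_match(profile_a, profile_b):
    """Fraction of shared STR alleles between two profiles, where each
    profile maps a locus name to the set of allele calls at that locus.
    Only loci typed in both profiles are compared."""
    shared = total = 0
    for locus in set(profile_a) & set(profile_b):
        a, b = set(profile_a[locus]), set(profile_b[locus])
        shared += 2 * len(a & b)   # an allele present in both counts twice
        total += len(a) + len(b)
    return shared / total if total else 0.0

# Hypothetical profiles: identical at TH01, differing at one D5S818 allele.
ref = {"TH01": {6, 9.3}, "D5S818": {11, 12}}
query = {"TH01": {6, 9.3}, "D5S818": {11, 13}}
```

Here `str_match(ref, query)` gives 0.75: six of the eight allele slots across the two compared loci agree.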