997 results for Parameter Determination


Relevance: 30.00%

Abstract:

The experiments observe and measure the length of the annular regime in fully condensing quasi-steady (steady-in-the-mean) flows of pure FC-72 vapor in a horizontal condenser (rectangular cross-section of 2 mm height, 15 mm width, and 1 m length). The sides and top of the duct are made of clear plastic that allows flow visualization. The experimental system in which this condenser is used is able to control and achieve different quasi-steady mass flow rates, inlet pressures, and wall cooling conditions (by adjustment of the temperature and flow rate of the cooling water flowing underneath the condensing plate). The reported measurements and correlations for the annular length also provide vital input to an ongoing, independent modeling and computational simulation effort aimed at determining the length of the annular regime and at proposing an extended correlation covering many vapors and a larger parameter set than the one reported experimentally here.

Relevance: 30.00%

Abstract:

Fourier transform infrared spectroscopy (FTIRS) can provide detailed information on organic and minerogenic constituents of sediment records. Based on a large number of sediment samples of varying age (0–340,000 yrs) and from very diverse lake settings in Antarctica, Argentina, Canada, Macedonia/Albania, Siberia, and Sweden, we have developed universally applicable calibration models for the quantitative determination of biogenic silica (BSi; n = 816), total inorganic carbon (TIC; n = 879), and total organic carbon (TOC; n = 3164) using FTIRS. These models are based on the differential absorbance of infrared radiation at specific wavelengths with varying concentrations of individual parameters, due to molecular vibrations associated with each parameter. The calibration models have low prediction errors, and the predicted values are highly correlated with conventionally measured values (R = 0.94–0.99). Robustness tests indicate the accuracy of the newly developed FTIRS calibration models is similar to that of conventional geochemical analyses. Consequently, FTIRS offers a useful and rapid alternative to conventional analyses for the quantitative determination of BSi, TIC, and TOC. The rapidity, cost-effectiveness, and small sample size required enable FTIRS determination of geochemical properties to be undertaken at higher resolutions than would otherwise be possible with the same resource allocation, thus providing crucial sedimentological information for climatic and environmental reconstructions.
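As an illustration of the kind of calibration described above, the following sketch fits a multivariate regression model to FTIR spectra to predict TOC. The abstract does not name the regression technique, so partial least squares (a common choice for FTIRS calibrations) and all data shown here are assumptions.

    # Hypothetical sketch: calibrating an FTIRS model for TOC with PLS regression.
    # `spectra` (n_samples x n_wavenumbers) and `toc` (conventionally measured TOC, %)
    # are random stand-ins for data such as the n = 3164 TOC samples described above.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    spectra = rng.random((200, 1800))       # stand-in absorbance spectra
    toc = rng.random(200) * 10.0            # stand-in TOC values (%)

    model = PLSRegression(n_components=8)   # number of latent variables is an assumption
    predicted = cross_val_predict(model, spectra, toc, cv=10).ravel()

    # with real spectra, this is where the correlation (reported as R = 0.94-0.99 above)
    # and the cross-validated prediction error would be evaluated
    r = np.corrcoef(toc, predicted)[0, 1]
    rmsep = np.sqrt(np.mean((toc - predicted) ** 2))
    print(f"R = {r:.2f}, RMSEP = {rmsep:.2f} %")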

Relevance: 30.00%

Abstract:

Meindl et al. (Adv Space Res 51(7):1047–1064, 2013) showed that the geocenter z-component estimated from observations of global navigation satellite systems (GNSS) is strongly correlated to a particular parameter of the solar radiation pressure (SRP) model developed by Beutler et al. (Manuscr Geod 19:367–386, 1994). They analyzed the forces caused by SRP and the impact on the satellites’ orbits. The authors achieved their results using perturbation theory and celestial mechanics. Rebischung et al. (J Geod doi:10.1016/j.asr.2012.10.026, 2013) also deal with the geocenter determination with GNSS. The authors carried out a collinearity diagnosis of the associated parameter estimation problem. They conclude “without much exaggerating that current GNSS are insensitive to any component of geocenter motion”. They explain this inability by the high degree of collinearity of the geocenter coordinates mainly with satellite clock corrections. Based on these results and additional experiments, they state that the conclusions drawn by Meindl et al. (Adv Space Res 51(7):1047–1064, 2013) are questionable. We do not agree with these conclusions and present our arguments in this article. In the first part, we review and highlight the main characteristics of the studies performed by Meindl et al. (Adv Space Res 51(7):1047–1064, 2013) to show that the experiments are quite different from those performed by Rebischung et al. (J Geod doi:10.1016/j.asr.2012.10.026, 2013). In the second part, we show that normal equation (NEQ) systems are regular when estimating geocenter coordinates, implying that the covariance matrices associated with the NEQ systems may be used to assess the sensitivity to geocenter coordinates in a standard way. The sensitivity of GNSS to the components of the geocenter is discussed. Finally, we comment on the arguments raised by Rebischung et al. (J Geod doi:10.1016/j.asr.2012.10.026, 2013) against the results of Meindl et al. (Adv Space Res 51(7):1047–1064, 2013).
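The claim that a regular NEQ system allows a standard sensitivity assessment can be illustrated with a minimal sketch (not the authors' software): build the normal matrix, check that it is invertible, and read formal errors from the scaled inverse. The design matrix, weights, and parameter ordering below are placeholders; in a real GNSS adjustment the design matrix contains the partial derivatives of the observations with respect to all estimated parameters, including the geocenter coordinates.

    # Minimal sketch: sensitivity from a regular normal equation (NEQ) system.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 10))        # toy design matrix (10 parameters)
    P = np.eye(500)                           # observation weight matrix
    l = rng.standard_normal(500)              # observed-minus-computed vector

    N = A.T @ P @ A                           # normal equation matrix
    b = A.T @ P @ l
    print("condition number:", np.linalg.cond(N))   # regular (invertible) if moderate

    x = np.linalg.solve(N, b)                 # estimated parameter corrections
    Q = np.linalg.inv(N)                      # cofactor matrix
    res = l - A @ x
    sigma0 = np.sqrt(res.T @ P @ res / (len(l) - len(x)))
    formal_errors = sigma0 * np.sqrt(np.diag(Q))
    print("formal errors (geocenter components would be read off here):", formal_errors[:3])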

Relevance: 30.00%

Abstract:

The Interstellar Boundary Explorer (IBEX) samples the interstellar neutral (ISN) gas flow of several species every year from December through late March, when the Earth moves into the incoming flow. The first quantitative analyses of these data resulted in a narrow tube in the four-dimensional interstellar parameter space that couples speed, flow latitude, flow longitude, and temperature, with center values at approximately 3° larger longitude and 3 km s⁻¹ lower speed than the Ulysses results, but with similar temperatures. IBEX has now recorded six years of ISN flow observations, providing a large database over increasing solar activity and with varying viewing strategies. In this paper, we evaluate systematic effects that are important for the determination of the ISN flow vector and temperature. We find that all models in use return ISN parameters well within the observational uncertainties and that the derived ISN flow direction is resilient against uncertainties in the ionization rate. We establish observationally an effective IBEX-Lo pointing uncertainty of ±0.18° in spin angle and confirm an uncertainty of ±0.1° in longitude. We also show that the IBEX viewing strategy with different spin-axis orientations minimizes the impact of several systematic uncertainties and thus improves the robustness of the measurement. The Helium Warm Breeze has likely contributed substantially to the somewhat different center values of the ISN flow vector. By separating the flow vector and temperature determination, we can mitigate these effects on the analysis, which returns an ISN flow vector very close to the Ulysses results, but with a substantially higher temperature. Because of the coupling with the ISN flow speed along the ISN parameter tube, we provide the temperature T_ISN∞ = 8710 (+440/−680) K for V_ISN∞ = 26 km s⁻¹ for comparison, where most of the uncertainty is systematic and likely due to the presence of the Warm Breeze.

Relevance: 30.00%

Abstract:

Drilling at Site 207 (DSDP Leg 21), located on the broad summit of the Lord Howe Rise, bottomed in rhyolitic rocks. Sanidine concentrates from four samples of the rhyolite were dated by the 40Ar/39Ar total-fusion method and the conventional K-Ar method, and yielded concordant ages of 93.7 ± 1.1 m.y., equivalent to the early part of the Upper Cretaceous. At that time the Lord Howe Rise, which has a continental-type structure, is thought to have been emergent and adjacent to the eastern margin of the Australian–Antarctic continent. Subsequent to 94 m.y. ago and prior to deposition of Maastrichtian (70-65 m.y. BP) marine sediments on top of the rhyolitic basement of the Lord Howe Rise, rifting occurred and the formation of the Tasman Basin began by sea-floor spreading, with rotation of the Rise away from the margin of Australia. Subsidence of the Rise continued until the Early Eocene (about 50 m.y. BP), probably marking the end of sea-floor spreading in the Tasman Basin. These large-scale movements relate to the breakup of this part of Gondwanaland in the Upper Cretaceous.
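For reference, the conventional K-Ar ages quoted above follow from the standard age equation (not reproduced in the report itself), where λ is the total decay constant of ⁴⁰K, λ_e its partial decay constant to ⁴⁰Ar, and ⁴⁰Ar* the radiogenic argon measured in the sanidine:

    t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_e}\,\frac{^{40}\mathrm{Ar}^{*}}{^{40}\mathrm{K}}\right)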

Relevance: 30.00%

Abstract:

In the photovoltaic field, back-contact solar cell technology has appeared as an alternative to traditional silicon modules. This new type of cell places both the positive and negative contacts on the back side of the cell, maximizing the surface exposed to light and making the interconnection of the cells in the module easier. The Emitter Wrap-Through solar cell structure presents thousands of tiny holes that wrap the emitter from the front surface to the rear surface. These holes are made in a first step over the silicon wafers by means of a laser drilling process. This step is quite harmful from a mechanical point of view, since the holes act as stress concentrators, leading to a reduction in the strength of these wafers. This paper presents the results of the strength characterization of drilled wafers. The study is carried out by testing the samples with a ring-on-ring device. Finite element models are developed to simulate the tests. The stress concentration factor of the drilled wafers under these loading conditions is determined from the FE analysis. Moreover, the material strength is characterized by fitting the fracture stress of the samples to a three-parameter Weibull cumulative distribution function. The parameters obtained are compared with those obtained in the analysis of a set of samples without holes to validate the method employed for the study of the strength of drilled silicon wafers.
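A minimal sketch of the Weibull step described above, assuming hypothetical fracture stresses and scipy's maximum-likelihood fit (the paper's own fitting procedure is not specified in the abstract):

    # Fitting fracture stresses from ring-on-ring tests to a three-parameter
    # Weibull cumulative distribution. The stress values are placeholders.
    import numpy as np
    from scipy import stats

    fracture_stress = np.array([182., 195., 201., 210., 214., 220., 228., 235., 241., 256.])  # MPa, illustrative

    # weibull_min's shape (Weibull modulus m), loc (threshold stress), and scale
    # (characteristic strength) are the three Weibull parameters.
    m, sigma_u, sigma_0 = stats.weibull_min.fit(fracture_stress)
    print(f"m = {m:.2f}, threshold = {sigma_u:.1f} MPa, scale = {sigma_0:.1f} MPa")

    # CDF: P_f = 1 - exp(-((sigma - sigma_u)/sigma_0)**m) for sigma >= sigma_u
    p_f = stats.weibull_min.cdf(fracture_stress, m, loc=sigma_u, scale=sigma_0)
    print(p_f)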

Relevance: 30.00%

Abstract:

Using the 3-D equations of linear elasticity and asymptotic expansion methods in terms of powers of the beam cross-section area, taken as the small parameter, different beam theories can be obtained according to the last term kept in the expansion. If only the first two terms of the asymptotic expansion are used, the classical beam theories can be recovered without resort to any a priori additional hypotheses. Moreover, some small corrections and extensions of the classical beam theories can be found, and there is also the possibility of using the asymptotic general beam theory as a basis for a straightforward derivation of the stiffness matrix and the equivalent nodal forces of the beam. To obtain the above results, a set of functions and constants depending only on the cross-section of the beam has to be computed as solutions of different 2-D Laplacian boundary value problems over the beam cross-section domain. In this paper two main numerical procedures to solve these boundary value problems are discussed, namely the Boundary Element Method (BEM) and the Finite Element Method (FEM). Results for some regular and geometrically simple cross-sections are presented and compared with those computed analytically. Extensions to other arbitrary cross-sections are illustrated.
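As an illustrative stand-in for one such 2-D Laplacian boundary value problem (not the BEM or FEM codes discussed in the paper), the sketch below solves Laplace's equation on a rectangular cross-section with hypothetical Dirichlet data by simple finite differences:

    # Jacobi iteration for Laplace's equation on a rectangular cross-section.
    import numpy as np

    nx, ny = 41, 21                      # grid over the cross-section
    u = np.zeros((ny, nx))
    u[0, :] = 1.0                        # hypothetical Dirichlet data on one edge

    for _ in range(5000):                # interior points converge to a harmonic function
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

    print("value at the section centre:", u[ny // 2, nx // 2])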

Relevance: 30.00%

Abstract:

Determination of reliable solute transport parameters is an essential aspect of characterizing the mechanisms and processes involved in solute transport (e.g., pesticides, fertilizers, contaminants) through the unsaturated zone. A rapid, inexpensive method to estimate the dispersivity parameter at the field scale is presented herein. It is based on the quantification of total bromine in soil by the solid-state X-ray fluorescence technique, along with an inverse numerical modeling approach. The results show that this methodology is a good alternative to the classic Br− determination in soil water by ion chromatography. A good agreement between the observed and simulated total soil Br is reported. The results highlight the potential applicability of both combined techniques to readily infer solute transport parameters under field conditions.
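A minimal sketch of the inverse-modeling idea, assuming a 1-D analytical advection-dispersion (Ogata-Banks) solution as a stand-in for the study's numerical model; the velocity, time, depths, and data below are placeholders:

    # Fit the dispersivity that best reproduces an observed Br profile.
    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import curve_fit

    v, t, c0 = 0.5, 30.0, 1.0            # assumed pore-water velocity (cm/d), time (d), input concentration

    def ogata_banks(x, alpha):
        """Relative Br concentration at depth x for dispersivity alpha (D = alpha * v)."""
        D = alpha * v
        a = (x - v * t) / (2.0 * np.sqrt(D * t))
        b = (x + v * t) / (2.0 * np.sqrt(D * t))
        return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

    depth = np.linspace(1.0, 40.0, 15)   # hypothetical observation depths (cm)
    observed = ogata_banks(depth, 2.0) + np.random.default_rng(2).normal(0.0, 0.01, depth.size)

    alpha_fit, _ = curve_fit(ogata_banks, depth, observed, p0=[1.0], bounds=(0.2, 20.0))
    print(f"estimated dispersivity: {alpha_fit[0]:.2f} cm")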

Relevance: 30.00%

Abstract:

Traditionally, densities of newly built roadways are checked by direct sampling (cores) or by nuclear density gauge measurements. For roadway engineers, the density of asphalt pavement surfaces is essential to determine pavement quality. Unfortunately, field measurements of density by direct sampling or by nuclear measurement are slow processes. Therefore, I have explored the use of rapidly deployed ground penetrating radar (GPR) as an alternative means of determining pavement quality. The dielectric constant of the pavement surface may correlate with pavement density and can be used as a proxy when the density of the asphalt is not known from nuclear or destructive methods. The dielectric constant of the asphalt can be determined using GPR. In order to use GPR for the evaluation of road surface quality, the relationship between the dielectric constants of asphalt and their densities must be established. Field measurements of GPR were taken at four highway sites in Houghton and Keweenaw Counties, Michigan, where density values were also obtained using nuclear methods in the field. Laboratory studies involved asphalt samples taken from the field sites and samples created in the laboratory. These were tested in various ways, including density, thickness, and time-domain reflectometry (TDR). In the field, GPR data were acquired using a 1000 MHz air-launched unit and a ground-coupled unit at 200 and 500 MHz. The equipment used was owned and operated by the Michigan Department of Transportation (MDOT) and available for this study for a total of four days during summer 2005 and spring 2006. The analysis of the reflected waveforms included “routine” processing for velocity using commercial software and direct evaluation of reflection coefficients to determine a dielectric constant. The dielectric constants computed from velocities do not agree well with those obtained from reflection coefficients. Perhaps due to the limited range of asphalt types studied, no correlation between density and dielectric constant was evident. Laboratory measurements were taken with samples removed from the field and samples created for this study. Samples from the field were studied using TDR in order to obtain the dielectric constant directly, and these values correlated well with the estimates made from reflection coefficients. Samples created in the laboratory were measured using 1000 MHz air-launched GPR and 400 MHz ground-coupled GPR, each under both wet and dry conditions. On the basis of these observations, I conclude that the dielectric constant of asphalt can be reliably measured from waveform amplitude analysis of GPR data, given the consistent agreement with values obtained in the laboratory using TDR. Because of the uniformity of the asphalts studied here, any correlation between dielectric constant and density is not yet apparent.
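The two dielectric-constant estimates compared in this work can be sketched as follows; the thickness, travel time, and amplitudes are illustrative values, not measurements from the study:

    # Dielectric constant of asphalt from (1) layer velocity and (2) surface reflection.
    import numpy as np

    c = 0.2998                              # speed of light in m/ns

    # (1) From two-way travel time through a layer of known thickness (e.g., a core):
    thickness_m = 0.05
    twt_ns = 0.75                           # two-way travel time picked from the radargram
    v = 2.0 * thickness_m / twt_ns          # wave velocity in the layer
    eps_from_velocity = (c / v) ** 2

    # (2) From the surface reflection amplitude of an air-launched antenna,
    #     normalized by a metal-plate (perfect reflector) measurement:
    A_surface, A_metal = 0.55, 1.0
    R = A_surface / A_metal
    eps_from_reflection = ((1 + R) / (1 - R)) ** 2

    print(eps_from_velocity, eps_from_reflection)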

Relevance: 30.00%

Abstract:

Health management of dairy cows has gained importance on farms in recent years. Alongside preventive measures to keep the animals healthy, its main component is the early and systematic detection of disease. It is increasingly apparent that transition cows in particular suffer from metabolic diseases in both clinical and subclinical form. The latter pose a high risk, on the one hand because subclinical diseases are often difficult or impossible to detect, and on the other because in many cases they form the basis for usually more severe secondary diseases. The present study addresses the early detection of subclinical ketosis and subacute ruminal acidosis. Various methods were tested for their suitability for disease detection under practical, on-farm conditions. In a first study, ketosis monitoring of fresh lactating cows was carried out on a conventional dairy farm from day 3 postpartum onwards. A total of 15 animals suffered from subclinical ketosis, corresponding to an incidence of 26% among the animals examined. Blood samples from a total of 24 animals were analyzed for their IL-6 content; of these, 14 animals were diseased and 10 formed the healthy control group. Interleukin-6 was determined because other studies had attributed a role to the cytokine IL-6 in connection with ketosis. The expected increase of IL-6 in diseased animals could not be confirmed; rather, the diseased cows showed the lowest IL-6 values of the study group. Overall, the IL-6 concentrations were at a low level of 27.2 ± 10.2 pg/mL. It became apparent that IL-6 determination in blood is only of limited use for detecting subclinical ketosis; only a weak negative correlation between beta-hydroxybutyrate (BHBA, the gold standard for detecting ketosis) and IL-6 was found. In addition to the blood analyses, daily rumination activity was determined with the "DairyCheck" system, which continuously records the characteristic contractions of the masticatory muscles and thus allows the duration of rumination to be determined. It was examined whether ketotic animals differ from non-ketotic animals with regard to daily rumination time. Dairy cows with ketosis ruminated on average 475 ± 56 min/d, and 497 ± 48 min/d after recovery; on average they thus always remained below the healthy control group, which ruminated 521 ± 76 min/d. The correlation between rumination time and blood BHBA content was only very weak, not least because the animals generally showed high variability in rumination activity. A further study, also conducted on a commercial farm, focused on the detection of subacute ruminal acidosis (SARA). Here, a wireless, commercially available bolus system was used that continuously measures the pH in the reticulorumen, making the detection of SARA possible under practical conditions without invasive methods such as rumenocentesis. The bolus system was administered orally to 24 dairy cows shortly before calving in order to measure and monitor the pH throughout the entire transition period.
While only isolated SARA cases occurred during the dry period, the majority of the animals examined suffered from SARA in early lactation. Based on the pH values of lactating dairy cows, a sensitivity analysis of various detection methods already in use was additionally carried out to examine their suitability for SARA diagnostics. These included SARA detection based on single pH measurements, feeding and rumination times, as well as selected milk constituents and milk yield. The analysis showed that, compared with long-term measurement, nearly all of these detection methods are only of limited use for SARA diagnostics. In a further part of the study, fecal fractionation was carried out on the same animals in order to also detect SARA animals by means of fecal analysis. It could be shown that, on the one hand, the ration influences fecal composition (dry-cow ration versus lactation ration) and, on the other hand, that SARA alters the composition of the feces. For the latter, fecal samples were examined exclusively from lactating cows so that the influence of the ration could be excluded. Increased fiber fractions in the feces of SARA cows indicated reduced digestibility; hemicellulose in particular proved to be a good parameter for inferring SARA. The experimental conditions also made it possible to examine the pH profiles of the animals in early lactation. A cluster analysis of pH values from the first 12 days postpartum showed that the examined animals developed different pH profiles despite identical housing and feeding conditions: one group of dairy cows was able to keep the pH stable, while the remaining animals exhibited pH drops of differing courses and intensities. It could also be shown that animals within the test herd developed different degrees of SARA severity. This study, too, made clear that animals apparently differ in their ability to respond to their environment and to counteract suboptimal conditions. In summary, various methods for ketosis and SARA detection were tested, of which only a few can be recommended for practical use. The variability among the animals, both in the manifestation of the diseases and in the measured parameters, underscores the need to take this variability into account more strongly in future herd and health management.
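A minimal sketch of how continuous bolus pH data can be turned into a daily SARA flag; the threshold and duration below (pH below 5.8 for more than about 5 h per day) are a commonly cited criterion and an assumption here, since the exact criterion used in the study is not stated in the abstract:

    # Flag SARA days from continuous reticuloruminal pH readings.
    import numpy as np

    def sara_day(ph_values, interval_min=10, ph_threshold=5.8, max_hours_below=5.0):
        """ph_values: one day of pH readings taken every `interval_min` minutes."""
        hours_below = np.sum(np.asarray(ph_values) < ph_threshold) * interval_min / 60.0
        return hours_below > max_hours_below

    day = 5.9 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, 144))   # synthetic daily pH curve
    print("SARA day:", sara_day(day))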

Relevance: 30.00%

Abstract:

The modeling of the metal dust explosion phenomenon is important in order to safeguard industries from potential accidents. A key parameter of these models is the burning velocity, which represents the consumption rate of the reactants by the flame front during the combustion process. This work is focused on the experimental determination of the aluminium burning velocity through an alternative method, called the "Direct method". The study of the methods used and the results obtained is preceded by a general analysis of the dust explosion phenomenon, flame propagation, the characteristics of the metal combustion process, and standard methods for determining the burning velocity. The "Direct method" requires a flame propagating through a tube to be recorded by high-speed cameras. Thus, the flame propagation test is carried out inside a vertical prototype made of glass. The study considers two optical techniques: direct visualization of the light emitted by the flame and the Particle Image Velocimetry (PIV) technique. These techniques were used simultaneously and allow the determination of two velocities: the flame propagation velocity and the flow velocity of the unburnt mixture. Since the burning velocity is defined by these two quantities, its direct determination is done by subtracting the flow velocity of the fresh mixture from the flame propagation velocity. The results obtained by this direct determination are approximated by a linear curve and different non-linear curves, which show a fluctuating behaviour of the burning velocity. Furthermore, the burning velocity is strongly affected by turbulence. Turbulence intensity can be evaluated from the PIV data. A comparison between burning velocity and turbulence intensity highlighted that both have a similar trend.
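The arithmetic of the "Direct method" is simply the difference of the two measured velocities; a minimal sketch with placeholder values, not data from the experiments:

    # Burning velocity = flame-front propagation velocity - unburnt-flow velocity,
    # both taken at the same instants (flame tracking and PIV, respectively).
    import numpy as np

    flame_velocity = np.array([2.10, 2.35, 2.05, 2.50])    # m/s, from high-speed flame tracking
    unburnt_flow = np.array([1.60, 1.80, 1.55, 1.95])      # m/s, from PIV ahead of the front

    burning_velocity = flame_velocity - unburnt_flow
    print(burning_velocity, burning_velocity.mean())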

Relevance: 20.00%

Abstract:

The present paper describes a novel, simple, and reliable differential pulse voltammetric method for determining amitriptyline (AMT) in pharmaceutical formulations. It has been reported by many authors that this antidepressant is electrochemically inactive at carbon electrodes. However, the procedure proposed herein consists in electrochemically oxidizing AMT at an unmodified carbon nanotube paste electrode in the presence of 0.1 mol L(-1) sulfuric acid used as electrolyte. At such a concentration, the acid facilitated the AMT electro-oxidation through a one-electron transfer at 1.33 V vs. Ag/AgCl, as observed by the increase of the peak current. Under optimized conditions (modulation time 5 ms, scan rate 90 mV s(-1), and pulse amplitude 120 mV), a linear calibration curve was constructed in the range of 0.0-30.0 μmol L(-1), with a correlation coefficient of 0.9991 and a limit of detection of 1.61 μmol L(-1). The procedure was successfully validated for intra- and inter-day precision and accuracy. Moreover, its feasibility was assessed through the analysis of commercial pharmaceutical formulations, and it was compared to the UV-vis spectrophotometric method used as the standard analytical technique recommended by the Brazilian Pharmacopoeia.
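A minimal sketch of the calibration arithmetic behind such figures of merit, assuming illustrative currents and the common 3s/slope convention for the detection limit (the abstract does not state which formula was used):

    # Linear calibration curve and detection limit for a voltammetric method.
    import numpy as np

    conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])       # µmol L-1 (illustrative)
    current = np.array([0.02, 0.51, 1.03, 1.49, 2.02, 2.48, 3.01])  # µA (illustrative)

    slope, intercept = np.polyfit(conc, current, 1)
    residual_sd = np.std(current - (slope * conc + intercept), ddof=2)
    r = np.corrcoef(conc, current)[0, 1]

    lod = 3.0 * residual_sd / slope        # assumed 3s/slope convention
    print(f"slope = {slope:.3f} µA L µmol-1, r = {r:.4f}, LOD ≈ {lod:.2f} µmol L-1")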

Relevance: 20.00%

Abstract:

The segment of the world population showing permanent or temporary lactose intolerance is quite significant. Because milk is a widely consumed food with a high nutritional value, technological alternatives have been sought to overcome this dilemma. Microfiltration combined with pasteurization can not only extend the shelf life of milk but also maintain the sensory, functional, and nutritional properties of the product. This study developed a pasteurized, microfiltered, lactose-hydrolyzed (delactosed) skim milk (PMLHSM). Hydrolysis was performed using β-galactosidase at a concentration of 0.4 mL/L and incubation for approximately 21 h at 10±1°C. During these procedures, the degree of hydrolysis obtained (>90%) was accompanied by evaluation of the freezing point depression, and the remaining quantity of lactose was confirmed by HPLC. Milk was processed using a microfiltration pilot unit equipped with uniform transmembrane pressure (UTP) ceramic membranes with a mean pore size of 1.4 μm and a UTP of 60 kPa. The product was submitted to physicochemical, microbiological, and sensory evaluations, and its shelf life was estimated. Microfiltration reduced the aerobic mesophilic count by more than 4 log cycles. We were able to produce high-quality PMLHSM with a shelf life of 21 to 27 d when stored at 5±1°C in terms of sensory analysis and proteolysis index, and a shelf life of 50 d in terms of total aerobic mesophile count and titratable acidity.
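A hedged sketch of how the degree of hydrolysis can be tracked from the freezing point depression (FPD): hydrolysis splits each lactose molecule into glucose and galactose, roughly doubling the number of dissolved particles, so the extra depression scales approximately with the fraction of lactose hydrolyzed. The FPD values and the linear estimate below are assumptions for illustration, not the study's procedure:

    # Cryoscopic estimate of the degree of lactose hydrolysis (values in °C are placeholders).
    fpd_initial = -0.530          # before hydrolysis
    fpd_complete = -0.620         # assumed reference at ~100% hydrolysis
    fpd_measured = -0.615         # after ~21 h of incubation

    degree_of_hydrolysis = (fpd_measured - fpd_initial) / (fpd_complete - fpd_initial) * 100
    print(f"estimated degree of hydrolysis: {degree_of_hydrolysis:.1f} %")   # should exceed 90%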

Relevance: 20.00%

Abstract:

Flavanones (hesperidin, naringenin, naringin, and poncirin) in industrial, hand-squeezed, and fresh-squeeze-machine orange juices were determined by HPLC/DAD analysis using a previously described liquid-liquid extraction method. Method validation, including accuracy, was performed using recovery tests. Samples (36) collected from different Brazilian locations and brands were analyzed. Concentrations were determined using an external standard curve. The limits of detection (LOD) calculated were 0.0037, 1.87, 0.0147, and 0.0066 mg 100 g(-1), and the limits of quantification (LOQ) were 0.0089, 7.84, 0.0302, and 0.0200 mg 100 g(-1), for naringin, hesperidin, poncirin, and naringenin, respectively. The results demonstrated that hesperidin was present at the highest concentration levels, especially in the industrial orange juices; its average content was 69.85 mg 100 g(-1), with a concentration range of 18.80-139.00 mg 100 g(-1). The other flavanones showed the lowest concentration levels, with average contents (and concentration ranges) of 0.019 (0.01-0.30), 0.12 (0.1-0.17), and 0.13 (0.01-0.36) mg 100 g(-1), respectively. The results were also evaluated using principal component analysis (PCA), a multivariate technique, which showed that poncirin, naringenin, and naringin were the principal elements contributing to the variability in the sample concentrations.
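A minimal sketch of the two data-analysis steps mentioned above (external-standard quantification and PCA), with placeholder numbers rather than results from the paper:

    # External standard curve for one analyte, then PCA on a samples x analytes matrix.
    import numpy as np
    from sklearn.decomposition import PCA

    # external standard curve: peak area vs. concentration (mg/100 g), illustrative values
    std_conc = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
    std_area = np.array([1.0, 260.0, 515.0, 770.0, 1030.0])
    slope, intercept = np.polyfit(std_conc, std_area, 1)
    sample_conc = (980.0 - intercept) / slope          # concentration of an unknown sample

    # PCA of flavanone concentrations (naringin, hesperidin, poncirin, naringenin)
    X = np.random.default_rng(3).random((36, 4))       # stand-in for the 36 analyzed juices
    scores = PCA(n_components=2).fit_transform((X - X.mean(0)) / X.std(0))
    print(sample_conc, scores[:3])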

Relevance: 20.00%

Abstract:

In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that makes use of solutions of intermediate linear programming relaxations of an exact mixed-integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed-integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
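A minimal sketch of a random-key decoder for this winner determination problem, assuming a greedy acceptance rule; it illustrates the chromosome-to-solution mapping only and is not the authors' implementation (which also uses LP-relaxation-based initialization):

    # Decode a vector of random keys into a feasible set of winning bids:
    # sort bids by key, accept greedily when item sets do not overlap accepted bids.
    from typing import List, Set, Tuple

    def decode(keys: List[float], bids: List[Tuple[Set[int], float]]) -> float:
        """keys: one random key per bid; bids: (item set, price). Returns total revenue."""
        order = sorted(range(len(bids)), key=lambda i: keys[i])
        sold: Set[int] = set()
        revenue = 0.0
        for i in order:
            items, price = bids[i]
            if items.isdisjoint(sold):      # feasible: no item sold twice
                sold |= items
                revenue += price            # first-price: a winner pays its bid
        return revenue

    bids = [({1, 2}, 10.0), ({2, 3}, 14.0), ({3}, 6.0), ({1}, 5.0)]
    print(decode([0.7, 0.1, 0.9, 0.3], bids))   # accepts {2,3} then {1}: 19.0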