865 results for Multi-Point Method


Relevance: 90.00%

Abstract:

The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
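The Mid-Point and Minimum Speed methods named above can be illustrated with a minimal sketch: each half of a link bounded by two point detectors is assumed to be traversed at the spot speed of the nearer detector, while the Minimum Speed variant applies the lower of the two speeds to the whole link. This is a generic illustration, not the authors' implementation.

```python
def midpoint_travel_time(link_length_km, v_up_kmh, v_down_kmh):
    """Mid-Point travel time estimate: each half of the link is
    assumed to be traversed at the spot speed of the nearer detector."""
    half = link_length_km / 2.0
    return half / v_up_kmh + half / v_down_kmh  # hours

def minimum_speed_travel_time(link_length_km, v_up_kmh, v_down_kmh):
    """Minimum Speed variant: the whole link is traversed at the
    lower of the two detector speeds (conservative under queuing)."""
    return link_length_km / min(v_up_kmh, v_down_kmh)

# Example: 2 km link, upstream detector reads 80 km/h, downstream 40 km/h
t_mid = midpoint_travel_time(2.0, 80.0, 40.0) * 3600.0   # seconds
t_min = minimum_speed_travel_time(2.0, 80.0, 40.0) * 3600.0
```

Under congestion the two estimates diverge, which is exactly why the hybrid models switch between methods based on queue status.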

Relevance: 80.00%

Abstract:

The aim of the present study was to assess the prevalence of inadequate nutrient intake in a group of adolescents from São Bernardo do Campo, SP, Brazil. Energy and nutrient intake data were obtained through 24-hour dietary recalls applied to 89 adolescents. The prevalence of inadequacy was calculated using the EAR cut-point method, after adjustment for within-person variability using the procedure developed at Iowa State University. The Dietary Reference Intakes (DRIs) were used as the reference values for intake. For nutrients without an established EAR, the intake distribution was compared with the AI. The highest prevalences of inadequacy in both sexes were observed for magnesium (99.3% for males and 81.8% for females), zinc (44.0% for males and 23.5% for females), vitamin C (57.2% for males and 59.9% for females) and folate (34.8% for females). The proportion of individuals with intakes above the AI was negligible (less than 2.0%) in both sexes.
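The EAR cut-point method used in the study reduces to a simple computation once the intakes have been adjusted for within-person variability: the prevalence of inadequacy is the proportion of usual intakes below the Estimated Average Requirement. The intake values and EAR below are hypothetical.

```python
import numpy as np

def ear_cutpoint_prevalence(usual_intakes, ear):
    """EAR cut-point method: group prevalence of inadequate intake is
    estimated as the proportion of usual-intake values below the
    Estimated Average Requirement (EAR).  Assumes intakes have already
    been adjusted for within-person variability (e.g. by the Iowa
    State University procedure)."""
    intakes = np.asarray(usual_intakes, dtype=float)
    return float(np.mean(intakes < ear))

# Hypothetical adjusted zinc intakes (mg/day) against a hypothetical EAR of 8.5
intakes = [6.1, 7.9, 9.4, 10.2, 8.0, 12.3, 7.2, 9.9]
prev = ear_cutpoint_prevalence(intakes, 8.5)   # fraction of the group below the EAR
```

The adjustment step matters: applying the cut-point to raw single-day recalls would overstate the spread of the distribution and hence the prevalence.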

Relevance: 80.00%

Abstract:

This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by the predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, reaching the limits of the inequality constraints. The feasibility of the proposed approach is demonstrated using various IEEE test systems and a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that the utilization of the predictor-corrector method with the pure modified barrier approach accelerates the convergence of the problem in terms of the number of iterations and computational time. (C) 2008 Elsevier B.V. All rights reserved.
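The modified barrier idea behind the PCMBA can be shown on a toy one-dimensional problem (this is a generic sketch of Polyak-style modified barriers, not the PCMBA itself): minimize (x - 2)² subject to x ≤ 1, replacing the constraint with the term −μλ ln(1 + (1 − x)/μ) and updating the multiplier between inner Newton solves. The KKT solution is x* = 1, λ* = 2.

```python
def newton_inner(lam, mu, x, tol=1e-10, max_iter=100):
    """Minimize (x-2)^2 - mu*lam*ln(1 + (1-x)/mu) by damped Newton."""
    for _ in range(max_iter):
        t = 1.0 + (1.0 - x) / mu            # modified-barrier argument
        grad = 2.0 * (x - 2.0) + lam / t
        if abs(grad) < tol:
            break
        hess = 2.0 + lam / (mu * t * t)
        step = grad / hess
        x_new = x - step
        while 1.0 + (1.0 - x_new) / mu <= 1e-9:   # keep log argument positive
            step *= 0.5
            x_new = x - step
        x = x_new
    return x

def modified_barrier_solve(mu=0.1, outer=30):
    """Toy modified-barrier method for: min (x-2)^2  s.t.  x <= 1.
    Unlike the classical log barrier, mu can stay fixed; the
    multiplier update drives convergence to x* = 1, lambda* = 2."""
    x, lam = 0.0, 1.0
    for _ in range(outer):
        x = newton_inner(lam, mu, x)
        lam = lam / (1.0 + (1.0 - x) / mu)   # Polyak multiplier update
    return x, lam

x_opt, lam_opt = modified_barrier_solve()
```

Note the feasible-set expansion the abstract mentions: the log argument stays positive for x < 1 + μ, so iterates may slightly overshoot the constraint boundary and still be well defined.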

Relevance: 80.00%

Abstract:

This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of probe zero-adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were carried out and demonstrated the effectiveness of the employed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
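One classical three-probe scheme (the sequential three-point method) can be sketched as follows; it illustrates the separation principle, not necessarily the exact formulation of this paper. Three probes at one sample pitch read the surface simultaneously; the second difference of the three readings cancels both the translation and the tilt of the carriage, and the profile is recovered by double summation, up to an arbitrary straight line (which is removed anyway when evaluating straightness).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
y = 0.002 * np.sin(np.linspace(0.0, 3.0 * np.pi, n))  # true straightness profile (mm)
e = rng.normal(0.0, 0.01, n - 2)    # carriage translation error at each step
p = rng.normal(0.0, 0.001, n - 2)   # carriage tilt error (per probe pitch)

# Simulated simultaneous readings of three probes spaced one sample apart:
m1 = y[:-2] - e                     # probe 1
m2 = y[1:-1] - e - p                # probe 2 (one pitch further along, tilted)
m3 = y[2:] - e - 2.0 * p            # probe 3
c = m1 - 2.0 * m2 + m3              # translation and tilt cancel exactly

# Reconstruct the profile by double summation (y[0] = y[1] = 0 assumed,
# i.e. recovery up to an arbitrary straight line):
y_rec = np.zeros(n)
for i in range(n - 2):
    y_rec[i + 2] = c[i] + 2.0 * y_rec[i + 1] - y_rec[i]

# The second differences of the reconstruction match those of the true profile:
residual = np.max(np.abs(np.diff(y_rec, 2) - np.diff(y, 2)))
```

The cancellation in `c` is the whole point: carriage errors `e` and `p` never enter the recovered curvature, which is why the robot's own motion errors decouple from the part's form error.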

Relevance: 80.00%

Abstract:

We illustrate the flow behaviour of fluids with isotropic and anisotropic microstructure (internal length, layering with bending stiffness) by means of numerical simulations of silo discharge and flow alignment in simple shear. The Cosserat theory is used to provide an internal length in the constitutive model through bending stiffness to describe isotropic microstructure and this theory is coupled to a director theory to add specific orientation of grains to describe anisotropic microstructure. The numerical solution is based on an implicit form of the Material Point Method developed by Moresi et al. [1].
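The Material Point Method carries state on Lagrangian particles and solves momentum on a background grid each step. The particle-to-grid transfer with linear shape functions can be sketched as below — a generic 1D illustration of the MPM transfer stage, unrelated to the implicit Cosserat/director formulation of the paper.

```python
import numpy as np

def particles_to_grid(xp, mp, vp, dx, n_nodes):
    """Scatter particle mass and momentum to a 1D background grid using
    linear (tent) shape functions, as in the Material Point Method.
    Assumes all particles lie strictly inside the grid.
    xp: positions, mp: masses, vp: velocities."""
    mass = np.zeros(n_nodes)
    mom = np.zeros(n_nodes)
    for x, m, v in zip(xp, mp, vp):
        i = int(x / dx)              # index of the left grid node
        w = x / dx - i               # fractional position within the cell
        mass[i] += (1.0 - w) * m
        mass[i + 1] += w * m
        mom[i] += (1.0 - w) * m * v
        mom[i + 1] += w * m * v
    # Grid velocities (guarding against empty nodes):
    vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
    return mass, mom, vel

# Three material points on a 4-node grid of spacing 1.0
xp = np.array([0.25, 1.5, 2.75])
mp = np.array([1.0, 2.0, 1.0])
vp = np.array([0.5, -0.2, 0.1])
mass, mom, vel = particles_to_grid(xp, mp, vp, dx=1.0, n_nodes=4)
```

Because the tent weights form a partition of unity, total mass and momentum are conserved by the transfer — the property that makes the grid solve consistent with the particle state.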

Relevance: 80.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Abstract:

A recent trend in distributed computer-controlled systems (DCCS) is to interconnect the distributed computing elements by means of multi-point broadcast networks. Since the network medium is shared between a number of network nodes, access contention exists and must be solved by a medium access control (MAC) protocol. Usually, DCCS impose real-time constraints. In essence, by real-time constraints we mean that traffic must be sent and received within a bounded interval, otherwise a timing fault is said to occur. This motivates the use of communication networks with a MAC protocol that guarantees bounded access and response times to message requests. PROFIBUS is a communication network in which the MAC protocol is based on a simplified version of the timed-token protocol. In this paper we address the cycle time properties of the PROFIBUS MAC protocol, since the knowledge of these properties is of paramount importance for guaranteeing the real-time behaviour of a distributed computer-controlled system which is supported by this type of network.
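The timed-token rule underlying the PROFIBUS MAC can be illustrated with a simplified saturation simulation — a sketch of the principle, not the PROFIBUS state machine. Each master notes its real token rotation time (TRT) on token arrival, is always allowed one message, and may keep transmitting while the target rotation time TTR minus TRT is not exhausted. Under these assumptions the token cycle provably stays within TTR plus one message time per master, which is the flavor of cycle-time bound the paper studies.

```python
def simulate_timed_token(n_masters, c_msg, ttr, rotations=200):
    """Simplified timed-token simulation under saturation (every master
    always has pending traffic).  On token arrival a master computes
    its rotation time TRT; it always sends one message, plus further
    messages while the holding time TTR - TRT is not used up.
    Returns the observed token cycle times at master 0."""
    last_visit = [0.0] * n_masters
    t = 0.0
    cycles = []
    for _ in range(rotations):
        for i in range(n_masters):
            trt = t - last_visit[i]
            if i == 0 and t > 0.0:
                cycles.append(trt)          # cycle seen by master 0
            last_visit[i] = t
            tht = max(0.0, ttr - trt)       # token holding time
            n_msgs = 1 + int(tht // c_msg)  # one message always allowed
            t += n_msgs * c_msg
    return cycles

cycles = simulate_timed_token(n_masters=5, c_msg=0.5, ttr=20.0)
bound = 20.0 + 5 * 0.5   # TTR + n*C: worst-case cycle under this model
```

In the simulation the very first rotation actually attains the bound, which shows it is tight under this simplified model.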

Relevance: 80.00%

Abstract:

The main objective of this work is to report on the development of a multi-criteria methodology to support the assessment and selection of an Information System (IS) framework in a business context. The objective is to select a technological partner that provides the engine to be the basis for the development of a customized application for shrinkage reduction in supply chain management. Furthermore, the proposed methodology differs from most of those previously proposed in the sense that 1) it provides the decision makers with a set of pre-defined criteria along with their description and suggestions on how to measure them, and 2) it uses a continuous scale with two reference levels, so no normalization of the valuations is required. The methodology proposed here has been designed to be easy to understand and use, without specific support from a decision-making analyst.
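The two-reference-level idea can be sketched numerically: each criterion is scored on a continuous scale anchored at a "neutral" level (value 0) and a "good" level (value 100), so raw performances in their natural units need no prior normalization. The criteria, reference levels, and weights below are hypothetical, and the linear value function and additive aggregation are illustrative assumptions, not necessarily the paper's exact model.

```python
def value(raw, neutral, good):
    """Linear value function anchored at two reference levels:
    v(neutral) = 0, v(good) = 100.  Scores outside [0, 100] are
    allowed and mean 'worse than neutral' / 'better than good'."""
    return 100.0 * (raw - neutral) / (good - neutral)

def overall_score(perf, refs, weights):
    """Additive aggregation of per-criterion values."""
    return sum(w * value(perf[c], *refs[c]) for c, w in weights.items())

# Hypothetical criteria for comparing IS frameworks:
refs = {"cost_keur": (500.0, 200.0),   # lower is better: neutral 500, good 200
        "fit_pct":   (60.0, 90.0)}     # functional fit, %
weights = {"cost_keur": 0.4, "fit_pct": 0.6}

vendor_a = {"cost_keur": 350.0, "fit_pct": 75.0}
score_a = overall_score(vendor_a, refs, weights)
```

Note how a "lower is better" criterion is handled simply by placing the "good" reference below the "neutral" one — no min-max rescaling of the raw data is ever needed.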

Relevance: 80.00%

Abstract:

Questions: A multiple plot design was developed for permanent vegetation plots. How reliable are the different methods used in this design and which changes can we measure? Location: Alpine meadows (2430 m a.s.l.) in the Swiss Alps. Methods: Four inventories were obtained from 40 m² plots: four subplots (0.4 m²) with a list of species, two 10 m transects with the point method (50 points on each), one subplot (4 m²) with a list of species and visual cover estimates as a percentage, and the complete plot (40 m²) with a list of species and visual estimates in classes. This design was tested by five to seven experienced botanists in three plots. Results: Whatever the sampling size, only 45-63% of the species were seen by all the observers. However, the majority of the overlooked species had cover < 0.1%. Pairs of observers overlooked 10-20% fewer species than single observers. The point method was the best method for cover estimation, but it took much longer than visual cover estimates, and 100 points allowed for the monitoring of only a very limited number of species. The visual estimate as a percentage was more precise than classes. Working in pairs did not improve the estimates, but one botanist repeating the survey is more reliable than a succession of different observers. Conclusion: Lists of species are insufficient for monitoring. It is necessary to add cover estimates to allow for subsequent interpretations in spite of the overlooked species. The choice of the method depends on the available resources: the point method is time consuming but gives precise data for a limited number of species, while visual estimates are quick but allow for recording only large changes in cover. Constant pairs of observers improve the reliability of the records.
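The point method's precision limit noted above follows directly from binomial sampling: cover is estimated as the fraction of pin points hitting a species, and with only 100 points the standard error makes rare species essentially unmeasurable. A minimal numerical sketch (generic, with made-up counts):

```python
import math

def point_cover(hits, n_points):
    """Point-intercept cover estimate with its binomial standard error."""
    p = hits / n_points
    se = math.sqrt(p * (1.0 - p) / n_points)
    return p, se

# Two transects x 50 points = 100 points; species hit 7 times
cover, se = point_cover(7, 100)
```

Here the estimate is 7% cover with a standard error of roughly 2.5 percentage points — fine for abundant species, but a species at 0.1% cover would almost never be hit at all, which is why the design pairs the point method with visual estimates.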

Relevance: 80.00%

Abstract:

To further validate the doubly labeled water method for measurement of CO₂ production and energy expenditure in humans, we compared it with near-continuous respiratory gas exchange in nine healthy young adult males. Subjects were housed in a respiratory chamber for 4 days. Each received ²H₂¹⁸O at either a low (n = 6) or a moderate (n = 3) isotope dose. Low and moderate doses produced initial ²H enrichments of 5 and 10 × 10⁻³ atom percent excess, respectively, and initial ¹⁸O enrichments of 2 and 2.5 × 10⁻² atom percent excess, respectively. Total body water was calculated from isotope dilution in saliva collected at 4 and 5 h after the dose. CO₂ production was calculated by the two-point method using the isotopic enrichments of urines collected just before each subject entered and left the chamber. Isotope enrichments relative to predose samples were measured by isotope ratio mass spectrometry. At low isotope dose, doubly labeled water overestimated average daily energy expenditure by 8 ± 9% (SD) (range −7 to 22%). At moderate dose the difference was reduced to +4 ± 5% (range 0-9%). The isotope elimination curves for ²H and ¹⁸O from serial urines collected from one of the subjects showed expected diurnal variations but were otherwise quite smooth. The overestimate may be due to approximations in the corrections for isotope fractionation and isotope dilution. An alternative approach to the corrections is presented that reduces the overestimate to 1%.
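The two-point calculation referenced above can be sketched as follows, in simplified form: elimination rates for each isotope come from the enrichments at just two time points (entry and exit urines), and CO₂ production follows the basic Lifson relation rCO₂ ≈ (N/2)(kO − kH). The fractionation and dilution-space corrections the abstract discusses are deliberately omitted, and all numbers below are hypothetical.

```python
import math

def elimination_rate(e1, e2, days):
    """Isotope elimination rate (1/day) from enrichments above baseline
    at two time points, assuming mono-exponential washout."""
    return math.log(e1 / e2) / days

def co2_production(n_mol, k_o, k_h):
    """Simplified doubly labeled water estimate (mol CO2/day):
    rCO2 = N/2 * (kO - kH).  N is total body water in mol;
    fractionation and dilution-space corrections are omitted."""
    return 0.5 * n_mol * (k_o - k_h)

# Hypothetical 4-day chamber stay, N = 2500 mol total body water
k_o = elimination_rate(2.2e-2, 1.35e-2, 4.0)   # 18O, atom percent excess
k_h = elimination_rate(5.0e-3, 3.35e-3, 4.0)   # 2H
r_co2 = co2_production(2500.0, k_o, k_h)       # mol CO2/day
```

Because the result is a small difference of two similar rates, modest errors in either enrichment propagate strongly — one reason the low-dose group above showed the larger spread.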

Relevance: 80.00%

Abstract:

This work investigated the structure of sulfonated polystyrene-divinylbenzene-based gel-type, mesoporous, and macroporous ion exchange resins using several characterization methods. In addition, the effect of resin pore size on the chromatographic separation of amino acids was studied. The main focus was on determining the pore size and porosity of the resins, for which electron microscopy, nitrogen adsorption measurements, and inverse size exclusion chromatography were used. The best results were obtained with inverse size exclusion chromatography, which is based on using dextran polymers of different sizes as probe molecules. The method is suitable for studying meso- and macroporosity, but its weakness is the very long measurement time. It also yields a pore size distribution, but measuring a single resin can take a week. The method was therefore modified by using a mixture of two dextran polymers spanning the pore size range of interest. The chromatographic conditions were optimized so that the response peaks of the two dextrans in the injected mixture were resolved from each other, which allowed reliable determination of the relative porosity of the stationary phase under study. This fast method based on inverse size exclusion chromatography, developed in this work, is called the two-point method. The amount and distribution of sulfonic acid groups in the resins were studied by determining the cation exchange capacity of the resins and by examining the resin surface with confocal Raman spectroscopy. To determine the ion exchange ability of the sulfonic acid groups, the S/K ratio was measured across a cross-section of a resin converted to the K⁺ form. The results showed that the resins were uniformly sulfonated and that 95% of the sulfur atoms were in functional ion exchange groups. Lysine, serine, and tryptophan were used as model compounds in the amino acid separation. The resin was in the NH₄⁺ form and the bed volume was 91 mL. Water at pH 10 was used as the eluent.
The best result was obtained at a flow rate of 0.1 mL/min, at which all three amino acids were separated from each other on Finex Oy's mesoporous KEF78 resin. With the other resins studied, the three amino acids were not completely resolved under any operating conditions.
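One plausible reading of the two-point inverse size exclusion idea can be sketched in a few lines: the larger dextran is excluded from the pores and elutes at the interparticle volume, the smaller one samples the pores in the size window of interest, and the normalized difference of their elution volumes serves as a relative porosity index for comparing resins. This is an illustrative interpretation with made-up volumes, not the thesis's exact formula.

```python
def relative_porosity(v_large, v_small):
    """Two-point ISEC sketch (hypothetical reading of the method):
    v_large  - elution volume of a dextran too large to enter the pores
               (interparticle volume), mL
    v_small  - elution volume of a dextran small enough to sample the
               pores in the probed size window, mL
    The peak separation normalized by the excluded volume gives a
    relative porosity index for comparing stationary phases."""
    return (v_small - v_large) / v_large

idx = relative_porosity(35.0, 52.5)
```

The speed advantage over full ISEC comes from replacing a week-long series of single-probe injections with one injection of a two-probe mixture, at the cost of reporting a single relative index instead of a full pore size distribution.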

Relevance: 80.00%

Abstract:

Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis. This has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, thus making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should be different than in the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. Differences in data analysis methods are compared with data from different fields of industry in this thesis. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries. The results are compared to those from PLS and priority PLS.
The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS, and nonlinear modeling using nonlinear score vectors.
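The core contrast the thesis builds on — PCA finds high-variance directions while PLS finds response-predictive ones — can be shown in a few lines of linear algebra. This is a generic illustration on synthetic data using the standard definitions (first principal direction from the SVD; first PLS weight vector proportional to Xᵀy), not the thesis data or any specific multi-block variant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 5))
X[:, 0] *= 3.0                              # column 0: dominant variance, irrelevant
y = X[:, 1] + 0.1 * rng.normal(size=n)      # response driven by column 1

Xc = X - X.mean(axis=0)                     # mean-center descriptors
yc = y - y.mean()

# PCA: first principal direction = top right singular vector of Xc
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pca_dir = vt[0]

# PLS: first weight vector maximizes covariance with the response
w = Xc.T @ yc
pls_dir = w / np.linalg.norm(w)

# pca_dir is dominated by the high-variance column 0;
# pls_dir is dominated by the predictive column 1.
```

This is exactly why the modeling approach should follow the purpose of the model: for prediction the response-guided direction matters, while for exploring process variation the unsupervised one may be more informative.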

Relevance: 80.00%

Abstract:

A theory for the description of turbulent boundary layer flows over surfaces with a sudden change in roughness is considered. The theory resorts to the concept of displacement in origin to specify a wall function boundary condition for a kappa-epsilon model. An approximate algebraic expression for the displacement in origin is obtained from the experimental data by using the chart method of Perry and Joubert (J.F.M., vol. 17, pp. 193-122, 1963). This expression is subsequently included in the near-wall logarithmic velocity profile, which is then adopted as a boundary condition for a kappa-epsilon modelling of the external flow. The results are compared with the lower atmospheric observations made by Bradley (Q. J. Roy. Meteo. Soc., vol. 94, pp. 361-379, 1968) as well as with velocity profiles extracted from a set of wind tunnel experiments carried out by Avelino et al. (7th ENCIT, 1998). The measurements are found to be in good agreement with the theoretical computations. The skin-friction coefficient was calculated according to the chart method of Perry and Joubert (J.F.M., vol. 17, pp. 193-122, 1963) and to a balance of the integral momentum equation. In particular, the growth of the internal boundary layer thickness obtained from the numerical simulation is compared with predictions of the experimental data calculated by two methods, the "knee" point method and the "merge" point method.
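The displacement-in-origin idea admits a simple numerical analogue of the chart method: over a rough wall the mean velocity is logarithmic not in the height y but in y + e, so one can search for the origin shift e that makes the measured profile most nearly linear in ln(y + e). The sketch below demonstrates this on a synthetic profile; it is an illustration of the concept, not the paper's algebraic expression.

```python
import numpy as np

def fit_origin_shift(y, u, shifts):
    """Numerical analogue of the chart method: choose the displacement
    in origin e that makes u vs ln(y + e) most nearly linear (smallest
    least-squares residual); return e with the fitted slope and
    intercept.  y: heights above the nominal wall, u: mean velocities."""
    best = None
    for e in shifts:
        x = np.log(y + e)
        a, b = np.polyfit(x, u, 1)
        res = np.sum((u - (a * x + b)) ** 2)
        if best is None or res < best[0]:
            best = (res, e, a, b)
    return best[1], best[2], best[3]

# Synthetic profile obeying u = 2.5 * ln(y + 0.003) + 20 (shift e = 0.003)
y = np.linspace(0.01, 0.1, 20)
u = 2.5 * np.log(y + 0.003) + 20.0
e_fit, slope, _ = fit_origin_shift(y, u, np.linspace(0.0, 0.006, 61))
```

With real data the recovered slope would give the friction velocity through the von Kármán constant, which is how the chart method also yields the skin-friction coefficient mentioned above.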

Relevance: 80.00%

Abstract:

Vertebrate gap junctions are aggregates of transmembrane channels which are composed of connexin (Cx) proteins encoded by at least fourteen distinct genes in mammals. Since the same Cx type can be expressed in different tissues and more than one Cx type can be expressed by the same cell, the thorough identification of which connexin is in which cell type and how connexin expression changes after experimental manipulation has become quite laborious. Here we describe an efficient, rapid and simple method by which connexin type(s) can be identified in mammalian tissue and cultured cells using endonuclease cleavage of RT-PCR products generated from "multi primers" (sense primer, degenerate oligonucleotide corresponding to a region of the first extracellular domain; antisense primer, degenerate oligonucleotide complementary to the second extracellular domain) that amplify the cytoplasmic loop regions of all known connexins except Cx36. In addition, we provide sequence information on RT-PCR primers used in our laboratory to screen individual connexins and predictions of extension of the "multi primer" method to several human connexins.
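The identification step — cutting the amplified cytoplasmic-loop product with a restriction endonuclease and reading the fragment-length pattern — can be sketched in silico. The sequence and cut-site handling below are made up for illustration (the cut is taken at the start of the recognition site for simplicity); real connexin identification would use the actual cDNA sequences and the enzyme's true cleavage position.

```python
def digest_fragments(seq, site):
    """Predicted fragment lengths from cutting a PCR product at every
    occurrence of an endonuclease recognition site.  For simplicity the
    cut is placed at the start of the site, not at the enzyme's true
    cleavage position within it."""
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [bounds[j + 1] - bounds[j]
            for j in range(len(bounds) - 1) if bounds[j + 1] > bounds[j]]

# Hypothetical 30-bp amplicon with one GAATTC (EcoRI) site at position 12
amplicon = "ATGCGTACGTCAGAATTCGGCTAAGGCTTA"
frags = digest_fragments(amplicon, "GAATTC")   # -> [12, 18]
```

Comparing such predicted patterns across candidate connexins is what lets a single degenerate-primer amplification plus one digest distinguish which connexin a tissue expresses.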

Relevance: 80.00%

Abstract:

Pesticides in “PERA” orange samples (N = 57) from São Paulo City, Brazil were assessed and the pesticide intake contribution was estimated for chronic risk assessment. Seventy-six pesticides were evaluated by the gas chromatography multi-residue method, including isomers and metabolites (4,332 determinations). The mean recoveries at the limit of quantification level were in the range of 72-115%, and the relative standard deviation for five replicate samples was 1-11%. The limits of detection and quantification ranged from 0.005 to 0.4 mg kg⁻¹ and from 0.01 to 0.8 mg kg⁻¹, respectively. Pesticides were found in 42.1% of the samples at levels ranging from 0.06 to 2.9 mg kg⁻¹. Of the contaminated samples, 3.5% contained residues (bifenthrin and clofentezine) above the maximum residue level and 12.3% contained unauthorized pesticides (azinphos-ethyl, parathion, myclobutanil, profenofos, and fenitrothion). The estimated risk characterization for orange intake by adults and children, respectively, ranged from 0.04 to 6.6% and from 0.1 to 26.5% of the acceptable daily intake. The detection of irregular residues emphasizes the need for better implementation of Good Agriculture Practices and greater control of formulated products. The other pesticides surveyed did not pose a health risk due to consumption.
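The chronic risk figures above follow the standard estimated-daily-intake calculation: residue level times commodity consumption, divided by body weight, expressed as a percentage of the acceptable daily intake (ADI). A sketch with entirely hypothetical numbers — the residue, consumption, body weights and ADI below are not the paper's values:

```python
def percent_adi(residue_mg_per_kg, consumption_kg_per_day,
                body_weight_kg, adi_mg_per_kg_bw):
    """Estimated daily intake (EDI) of a pesticide from one commodity,
    expressed as a percentage of the acceptable daily intake (ADI)."""
    edi = residue_mg_per_kg * consumption_kg_per_day / body_weight_kg  # mg/kg bw/day
    return 100.0 * edi / adi_mg_per_kg_bw

# Hypothetical: 0.5 mg/kg residue, 100 g of orange eaten per day,
# ADI of 0.01 mg/kg bw/day
pct_adult = percent_adi(0.5, 0.100, 60.0, 0.01)   # 60 kg adult
pct_child = percent_adi(0.5, 0.100, 15.0, 0.01)   # 15 kg child
```

The body-weight denominator is why, as in the study, children's %ADI values run several times higher than adults' for the same food and residue level.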