967 results for Point interpolation method
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms; such datasets can be heterogeneous in terms of error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo-based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed-rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen to minimize information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented: the observations are considered one at a time, which avoids the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected sequential estimation and adds several novel features, in particular the ability to use a generic observation operator, or sensor model, to permit data fusion. It can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets, and discuss limitations and extensions. © 2010 Elsevier Ltd.
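A minimal numpy sketch of the idea this abstract describes, assuming an RBF kernel, synthetic 1-D data, and a subset-of-regressors style projection onto m fixed active points (gptk itself is C++; the names and data below are illustrative, not the library's API). Each observation is absorbed with a rank-one Kalman-style update, so no high-dimensional integral is ever formed:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf2=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)                      # inputs
y = np.sin(x) + 0.1 * rng.standard_normal(200)   # noisy observations
xa = np.linspace(-3, 3, 15)                      # m = 15 active points
noise = 0.1 ** 2                                 # observation noise variance

# Projected process == Bayesian linear model with features phi(x) = k(x, xa)
# and weight prior w ~ N(0, K_aa^{-1}); predictive mean is k(x*, xa) @ w.
K_aa = rbf(xa, xa) + 1e-8 * np.eye(15)
S = np.linalg.inv(K_aa)    # prior weight covariance
mu = np.zeros(15)          # prior weight mean

# Sequential inference: one observation at a time, rank-one update each.
for xi, yi in zip(x, y):
    phi = rbf(np.array([xi]), xa).ravel()
    Sphi = S @ phi
    gain = Sphi / (noise + phi @ Sphi)
    mu = mu + gain * (yi - phi @ mu)   # posterior mean update
    S = S - np.outer(gain, Sphi)      # posterior covariance downdate

xs = np.linspace(-3, 3, 7)
print(np.c_[np.sin(xs), rbf(xs, xa) @ mu])  # truth vs. projected posterior mean
```

The per-observation cost is O(m²) in the rank m of the projection, which is what makes the sequential projected scheme attractive for large datasets.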
Abstract:
Environmental impacts usually extend beyond the boundaries of a single company, which is why purchasing decisions also play an important role in enforcing environmental considerations in a supply chain context. Many examples could be cited in which an alternative is environmentally advantageous by one criterion yet, viewed over the supply chain as a whole, environmentally harmful. Measuring environmental impacts at the supply chain level, however, poses serious challenges, and this has inspired substantial research and development. One area with significant research results is the incorporation of environmental criteria into supplier evaluation. Contributing to this stream of research, the authors investigate how to determine, in one of the most widely used supplier evaluation methods, the weighted scoring system, the weight at which a given criterion becomes a decision-influencing factor; the choice of the weight system can thus control the outcome of the selection process. To this end they apply the Data Envelopment Analysis (DEA) composite indicators (CI) method and use linear programming theory to establish the importance of the criteria's common weights.
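A hedged sketch of the kind of DEA composite-indicator ("benefit of the doubt") linear program the abstract refers to, using scipy.optimize.linprog and invented toy supplier scores; the paper's exact formulation for finding the decision-influencing weight is not reproduced here. Each supplier in turn is allowed the weights most favorable to it, subject to no supplier scoring above 1:

```python
import numpy as np
from scipy.optimize import linprog

# Toy supplier scores on three criteria (rows: suppliers, cols: criteria;
# imagine the third column is an environmental criterion). Invented data.
X = np.array([[0.8, 0.6, 0.9],
              [0.9, 0.7, 0.4],
              [0.6, 0.9, 0.7]])
n, k = X.shape
eps = 1e-3  # lower bound keeping every criterion's weight strictly positive

for o in range(n):
    # maximize w . X[o]  <=>  minimize -X[o] . w
    # subject to X[i] . w <= 1 for every supplier i, and w_j >= eps
    res = linprog(c=-X[o], A_ub=X, b_ub=np.ones(n),
                  bounds=[(eps, None)] * k, method="highs")
    print(f"supplier {o}: score={-res.fun:.3f}, weights={res.x.round(3)}")
```

Raising eps on the environmental column probes how large that criterion's weight must be before it starts to change the ranking, which is the question the paper pursues.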
Abstract:
Stylization is a method of ornamental plant use usually applied in urban open space and garden design on aesthetic grounds. Stylization can be seen as a nature-imitating ornamental plant application that evokes the scenery, rather than an ecological plant application that assists the processes and functions observed in nature. From a different point of view, stylization of natural or semi-natural habitats can sometimes serve as a method for preserving the physiognomy of plant associations that may be affected by the climate change of the 21st century. The vulnerability of Hungarian habitats has thus far been examined by researchers only from the botanical point of view, not in terms of landscape design value. In Hungary, coniferous forests are edaphic and are classified on this basis: the General National Habitat Classification System (Á-NÉR) distinguishes calcareous Scots pine forests and acidofrequent coniferous forests. The latter appears highly sensitive to climate change according to ecological models. The physiognomy and species pool of its subtypes are strongly determined by the dominant coniferous species, which can be Norway spruce (Picea abies) or Scots pine (Pinus sylvestris). We discuss the methodology of stylization of climate-sensitive habitats and briefly refer to acidofrequent coniferous forests as a case study. In the course of stylization, those coniferous and deciduous tree species of the studied habitat that are water demanding should be substituted by drought-tolerant ones with similar characteristics. A list of the proposed taxa is given.
Abstract:
Stylization is a common method of ornamental plant use that imitates nature and evokes the scenery. This paper discloses a not yet proposed aspect of stylization: the method offers the possibility of preserving the physiognomy of habitats that may vanish due to future climate change. The novelty of the method also lies in the fact that the vulnerability of Hungarian habitats has so far been examined by researchers only from the botanical and ecological points of view, not in terms of landscape design value. In Hungary, acidofrequent mixed forests appear to be highly sensitive to climate change according to ecological models. We discuss the methodology of stylization of climate-sensitive habitats and briefly refer to acidofrequent mixed forests as a case study. Those coniferous and deciduous tree species of the studied habitat that are water demanding are proposed to be substituted by drought-tolerant ones with similar characteristics, and an optionally expandable list of these taxa is presented. On this basis, the authors suggest experimental investigation of those proposed taxa whose higher drought tolerance is so far supported only by observation.
Abstract:
Drawing on the toolkit of microeconomics and 2013 data from the Hungarian car market, the study presents a new method for price determination. The central question of the research is where the point lies at which the consumer is satisfied with the quality and price offered, preferably at the right time, while the company is satisfied with the profit achieved. In setting the price, quality and time therefore play a central role as value-creating functions. One of the main conclusions of the analysis is that the optimal price derived from the profit maximum can be determined for different parameters of quality and time. The method gives companies an economics-based perspective for setting their operating parameters, and with them their competitive priorities (price, cost, quality level, time).
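As a hedged illustration of "the optimal price derived from the profit maximum": assuming a linear demand curve whose intercept shifts with quality and delivery time (all coefficients below are invented, not the study's), the first-order condition of profit (p - c)(a - bp) gives p* = (a + bc)/(2b):

```python
# Minimal sketch, assuming linear demand q(p) = a - b*p whose intercept a is
# shifted by quality and time parameters; gamma and delta are hypothetical.
def optimal_price(c, a0, b, quality, time, gamma=2.0, delta=1.5):
    """p* maximizing (p - c) * (a - b*p), with a = a0 + gamma*quality - delta*time."""
    a = a0 + gamma * quality - delta * time
    return (a + b * c) / (2 * b)   # from d/dp [(p - c)(a - b*p)] = 0

print(optimal_price(c=10.0, a0=100.0, b=2.0, quality=5.0, time=3.0))
```

Sweeping the quality and time arguments then traces how the profit-maximizing price moves with the firm's competitive priorities, which is the kind of analysis the abstract describes.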
Abstract:
Limited literature on parameter estimation of dynamic systems has been identified as the central reason for the absence of parametric bounds in chaotic time series. The literature establishes that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. A parameter estimation technique could therefore establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds it is possible to improve the predictability of any time series, even when the dynamics or the mathematical model of that series is not known a priori. In our attempt to improve the predictability of various time series, we have established bounds for a set of unknown parameters: (i) the embedding dimension used to unfold a set of observations in the phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use while avoiding detection of false neighbors, and (iv) the order of the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view. Finally, our procedure significantly reduces computer resource usage, as the search method is refined and efficient, and the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
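The dissertation's search procedure for the four bounds is not given in the abstract; the sketch below shows only the underlying pipeline those bounds parameterize: a delay embedding of a scalar series (embedding dimension and time delay) followed by a zeroth-order local predictor that averages the one-step futures of the k nearest phase-space neighbors. The test signal is a chaotic logistic map; dim, tau, and k values are illustrative:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase-space reconstruction: row i is (x_i, x_{i+tau}, ..., x_{i+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[j * tau : j * tau + n] for j in range(dim)])

def predict_next(x, dim=3, tau=2, k=5):
    """Average the one-step futures of the k nearest phase-space neighbors."""
    E = delay_embed(x, dim, tau)
    query, history = E[-1], E[:-1]       # current state vs. earlier states
    futures = x[(dim - 1) * tau + 1:]    # one-step future of each earlier state
    history = history[:len(futures)]
    d = np.linalg.norm(history - query, axis=1)
    nn = np.argsort(d)[:k]               # indices of the k nearest neighbors
    return futures[nn].mean()

# Chaotic logistic-map series as a test signal:
x = np.empty(500); x[0] = 0.3
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
print("predicted:", predict_next(x[:-1]), "actual:", x[-1])
```

Bounding dim, tau, k, and the local model order, as the dissertation does, amounts to constraining the search space of this pipeline.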
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation. In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the influence of correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space; in other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations further away. The study area was Broward County, Florida, which lies on the Atlantic coast between Palm Beach and Miami-Dade counties. A total of 67 variables were considered as potential AADT predictors, and six (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models. To investigate the predictive power of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped, and the local variations in relationships among parameters were investigated, measured, and mapped to assess the usefulness of the GWR methods. The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and its predictors, which cannot be modeled in ordinary linear regression.
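A minimal sketch of the GWR estimator itself, with invented toy coordinates and predictors rather than the Broward County data: at each location u, coefficients come from weighted least squares, beta(u) = (X'W(u)X)^{-1} X'W(u)y, with Gaussian kernel weights that decay with distance from u:

```python
import numpy as np

def gwr_fit_point(coords, X, y, point, bandwidth):
    """Local WLS fit at one location with Gaussian distance-decay weights."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xw = X * w[:, None]                          # rows scaled by weights: W X
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)   # (X'WX) beta = X'Wy

# Toy data: AADT-like response whose "lanes" effect drifts west to east.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (300, 2))
lanes = rng.integers(2, 7, 300).astype(float)
speed = rng.uniform(30, 70, 300)
slope = 1.0 + 0.2 * coords[:, 0]                 # spatially varying coefficient
y = 500 * slope * lanes + 20 * speed + rng.normal(0, 200, 300)
X = np.column_stack([np.ones(300), lanes, speed])  # intercept + predictors

for px in (1.0, 5.0, 9.0):                       # compare local fits across space
    b = gwr_fit_point(coords, X, y, np.array([px, 5.0]), bandwidth=2.0)
    print(f"x={px}: intercept={b[0]:.0f}, lanes={b[1]:.0f}, speed={b[2]:.1f}")
```

The recovered lanes coefficient grows across the three locations, which is exactly the spatial non-stationarity that a single ordinary regression would average away.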
Abstract:
Although calorie information at the point-of-purchase at fast food restaurants has been proposed as a method to decrease calorie choices and combat obesity, research results have been mixed, and much of the supportive research has weak methodology. There is a demonstrated need for better techniques to help consumers make lower-calorie food choices, as eating at fast food restaurants has been positively associated with weight gain. The current study explored adding exercise equivalents (EE), the physical activity required to burn off the calories in the food, alongside calorie information as a possible way to encourage lower-calorie choices at the point-of-choice in fast food restaurants. This three-group experimental study in 18- to 34-year-old overweight and obese women examines whether presenting caloric information in the form of EE at the point-of-choice at fast food restaurants leads to lower-calorie food choices compared with presenting simple caloric information or no information at all. Methods. A randomized repeated measures experiment was conducted. Participants ordered a fast food meal from Burger King with menus that contained only the names of the food choices (Lunch 1). One week later (Lunch 2), participants were given one of three menus: no information, calorie information, or calorie information and EE. The 62 participants were college-aged students, and the study controlled for dietary restraint by blocking participants on this variable before randomization to the three groups. Results. A repeated measures analysis of variance was conducted. The study was not sufficiently powered: it was designed to detect large effects, and only a small effect size of 0.026 was observed. No significant differences were found in the foods ordered among the menu conditions. Conclusion. Menu labeling alone might not be enough to reduce calories chosen at the point-of-choice at restaurants. Additional research is necessary to determine whether calorie information plus EE at the point-of-choice would lead to fewer calories chosen at a meal, and studies should also examine long-term, repeated exposure to determine the effectiveness of calorie and/or EE information at the point-of-choice at fast food restaurants.
Abstract:
The primary goal of this dissertation is to develop point-based rigid and non-rigid image registration methods with better accuracy than existing methods. We first present point-based PoIRe, which provides the framework for point-based global rigid registrations. It allows a choice of search strategies, including (a) branch-and-bound, (b) probabilistic hill-climbing, and (c) a novel hybrid method that takes advantage of the best characteristics of the other two. We use a robust similarity measure that is insensitive to noise, which is often introduced during feature extraction. We show the robustness of PoIRe by using it to register images obtained with an electronic portal imaging device (EPID), which have large amounts of scatter and low contrast. To evaluate PoIRe we used (a) simulated images and (b) images with fiducial markers; PoIRe was extensively tested with 2D EPID images and with images generated from 3D Computed Tomography (CT) and Magnetic Resonance (MR) images, and was also evaluated using benchmark data sets from the blind Retrospective Image Registration Evaluation (RIRE) project. We show that PoIRe outperforms existing methods such as Iterative Closest Point (ICP) and methods based on mutual information. We also present a novel point-based local non-rigid shape registration algorithm. We extend the robust similarity measure used in PoIRe to non-rigid registrations, adapting it to a free-form deformation (FFD) model and making it robust to local minima, a drawback common to existing non-rigid point-based methods. For non-rigid registrations we show that it performs better than existing methods and is less sensitive to starting conditions; we test it using available benchmark data sets for shape registration. Finally, we explore the extraction of features invariant to changes in perspective and illumination, and how they can help improve the accuracy of multi-modal registration. For multimodal registration of EPID-DRR images we present a method based on a local descriptor defined by a vector of complex responses to a circular Gabor filter.
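PoIRe itself is not specified in enough detail in the abstract to reproduce; the sketch below instead shows the classic ICP baseline it is compared against, in 2-D with invented points: alternate nearest-neighbor correspondence with a least-squares (Kabsch/Procrustes) rigid fit:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Point-to-point ICP: match each source point to its nearest target,
    solve the rigid fit, apply it, repeat."""
    src = P.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - Q[None, :, :], axis=2)
        R, t = kabsch(src, Q[d.argmin(axis=1)])
        src = src @ R.T + t
    return src

# Toy check: recover a known rotation + translation of a 2-D point set.
rng = np.random.default_rng(2)
Q = rng.normal(size=(60, 2))
th = 0.4
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
P = Q @ R.T + np.array([1.0, -0.5])             # misaligned copy of Q
aligned = icp(P, Q)
print("RMS after ICP:", np.sqrt(((aligned - Q) ** 2).sum(axis=1).mean()))
```

ICP's dependence on a good starting pose is the weakness the dissertation's global search strategies (branch-and-bound, probabilistic hill-climbing) are designed to avoid.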
Abstract:
The accurate and reliable estimation of travel time from point detector data is needed to support Intelligent Transportation System (ITS) applications. The quality of travel time estimation has been found to depend on the estimation method used and to vary across traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied in on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the developed models were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion, but their performance varied as congestion increased. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as incidents. The impacts of major influential factors on the performance of travel time estimation were also investigated, including data preprocessing procedures, detector errors, detector spacing, the frequency of travel time updates to traveler information devices, travel time link length, and the posted travel time range. The results show that these factors have more significant impacts on estimation accuracy and reliability under congested conditions than under uncongested conditions. For incident conditions, estimation quality improves with a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
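A minimal sketch of the speed-based Mid-Point method under its usual definition (each detector's spot speed is applied from the midpoint of the upstream gap to the midpoint of the downstream gap; the study's exact formulation may differ), with invented detector positions and speeds:

```python
def midpoint_travel_time(positions_mi, speeds_mph):
    """Corridor travel time (hours) from detector mileposts and spot speeds,
    assuming each detector's speed holds over its surrounding half-segments."""
    tt = 0.0
    last = len(speeds_mph) - 1
    for i, v in enumerate(speeds_mph):
        left = positions_mi[0] if i == 0 else (positions_mi[i - 1] + positions_mi[i]) / 2
        right = positions_mi[-1] if i == last else (positions_mi[i] + positions_mi[i + 1]) / 2
        tt += (right - left) / v   # length of detector i's influence zone / its speed
    return tt

positions = [0.0, 1.2, 2.5, 4.0]   # detector mileposts along the corridor (invented)
speeds = [55.0, 40.0, 25.0, 50.0]  # spot speeds (mph); 25 mph marks congestion
print(f"{midpoint_travel_time(positions, speeds) * 60:.1f} minutes")
```

The hybrid models in the abstract would switch away from this estimator (toward flow-based or Minimum Speed logic) when clustering flags congested or queued conditions.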
Abstract:
The three-parameter lognormal distribution extends the two-parameter lognormal distribution to meet the needs of the biological, sociological, and other fields. Numerous research papers have been published on parameter estimation for lognormal distributions. The inclusion of the location parameter brings technical difficulties to the parameter estimation problem, especially for interval estimation. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter of the three-parameter lognormal distribution. The point estimation problem is discussed as well, and the performance of the proposed point estimator is compared with that of the maximum likelihood estimator, which is widely used in practice. Simulation results show that the proposed method is less biased in estimating the location parameter. The large-sample case is also discussed.
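The paper's exact confidence-interval construction is not reproduced in the abstract; as a baseline, the maximum likelihood fit it compares against can be sketched with scipy.stats.lognorm, whose loc parameter is precisely the third (location) parameter. Data are simulated; the location MLE is known to be delicate for threshold parameters, which is part of what motivates a less biased estimator:

```python
import numpy as np
from scipy import stats

# Simulate a three-parameter lognormal: X = gamma + exp(mu + sigma * Z).
rng = np.random.default_rng(3)
gamma_true, mu, sigma = 5.0, 1.0, 0.5
x = gamma_true + np.exp(mu + sigma * rng.standard_normal(500))

# Maximum likelihood fit; scipy's parameterization is
# lognorm(s=sigma, loc=gamma, scale=exp(mu)).
s_hat, loc_hat, scale_hat = stats.lognorm.fit(x)
print(f"MLE: sigma={s_hat:.3f}, location={loc_hat:.3f}, mu={np.log(scale_hat):.3f}")
```

Comparing loc_hat against gamma_true over repeated simulations is the kind of bias assessment the abstract's simulation study reports.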
Abstract:
Objectives: Hospital discharge is a transition of care at which medication discrepancies are likely to occur and potentially cause patient harm. The purpose of our study was to assess the prescribing accuracy of hospital discharge medication orders at a London, UK teaching hospital. The timeliness of the discharge summary reaching the general practitioner (GP, family physician) was also assessed, based on the 72 h target referenced in the Care Quality Commission report [1]. Method: 501 consecutive discharge medication orders from 142 patients were examined, and the following records were compared: (1) the final inpatient drug chart at the point of discharge, (2) the printed, signed copy of the initial to-take-away (TTA) discharge summary produced electronically by the physician, (3) the pharmacist's handwritten amendments on the initial TTA, (4) the final electronic patient discharge summary record, and (5) the patient's final take-home medication from the hospital. Discrepancies between the physician's orders and the pharmacist's changes were classified into two types of failure: 'failure to make a required change' and 'change where none was required'. Once the patient was discharged, the patient's GP was contacted 72 h after discharge to check whether the discharge summary, sent by post or via email, had been received. Results: Over half of the patients (73 out of 142) had at least one discrepancy on the initial TTA that was made by the doctor and amended by the pharmacist. Among the 501 drugs there were 140 discrepancies: 108 were 'failures to make a required change' (77%) and 32 were 'changes where none was required' (23%). The types of 'failure to make a required change' found between the initial TTA and the pharmacist's amendments were paracetamol and ibuprofen changes (dose banding), 38 (27%); directions of use, 34 (24%); incorrect formulation of medication, 28 (20%); and incorrect strength, 8 (6%). The types of 'change where none was required' were omitted medication, 15 (11%); unnecessary drug, 14 (10%); and incorrect medicine, including spelling mistakes, 3 (2%). On contacting the GPs of the discharged patients 72 h post discharge, 49% had received the discharge summary and 45% had not; the remaining 6% were patients discharged without a GP. Conclusion: This study shows that physician prescribing at discharge is often inaccurate, and that the interventions made by pharmacists to reconcile medications are important at this point of care. It was also found that half the discharge summaries had not reached the patient's family physician (according to the GP) within 72 h.
Abstract:
Aims: Glycated hemoglobin (HbA1c) is an important indicator of glucose control over time. Point-of-care (POC) devices allow rapid and convenient measurement of HbA1c, greatly facilitating diabetes care. We assessed two POC analyzers in the Peruvian Amazon, where laboratory-based HbA1c testing is not available.
Methods: Venous blood samples were collected from 203 individuals from six different Amazonian communities with a wide range of HbA1c, 4.4-9.0% (25-75 mmol/mol). The results of the Afinion AS100 and the DCA Vantage POC analyzers were compared to a central laboratory using the Premier Hb9210 high-performance liquid chromatography (HPLC) method. Imprecision was assessed by performing 14 successive tests of a single blood sample.
Results: The correlation coefficient r for POC and HPLC results was 0.92 for the Afinion and 0.93 for the DCA Vantage. The Afinion generated higher HbA1c results than the HPLC (mean difference = +0.56% [+6 mmol/mol]; p < 0.001), as did the DCA Vantage (mean difference = +0.32% [+4 mmol/mol]). The bias observed between POC and HPLC did not vary by HbA1c level for the DCA Vantage (p = 0.190), but it did for the Afinion (p < 0.001). Imprecision results were: CV = 1.75% for the Afinion, CV = 4.01% for the DCA Vantage. Sensitivity was 100% for both devices, specificity was 48.3% for the Afinion and 85.1% for the DCA Vantage, positive predictive value (PPV) was 14.4% for the Afinion and 34.9% for the DCA Vantage, and negative predictive value (NPV) for both devices was 100%. The area under the receiver operating characteristic (ROC) curve was 0.966 for the Afinion and 0.982 for the DCA Vantage. Agreement between HPLC and POC in classifying diabetes and prediabetes status was slight for the Afinion (Kappa = 0.12) and significantly different (McNemar’s statistic = 89; p < 0.001), and moderate for the DCA Vantage (Kappa = 0.45) and significantly different (McNemar’s statistic = 28; p < 0.001).
Conclusions: Despite the significant deviation of both the Afinion and DCA Vantage HbA1c results from HPLC, we conclude that both analyzers can be considered for therapeutic adjustments in health clinics in the Peruvian Amazon, provided healthcare workers are aware of the differences relative to testing in a clinical laboratory. However, imprecision and bias were not low enough to recommend either device for screening purposes, and the local prevalence of anemia and malaria may interfere with diagnostic determinations for a substantial portion of the population.
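The headline statistics in the Results above (correlation, mean bias, imprecision CV, and Cohen's kappa for diagnostic classification) can all be computed from paired measurements; a sketch with invented toy readings, not the study's data:

```python
import numpy as np
from scipy import stats

# Invented paired HbA1c readings (%) from a POC device and the HPLC reference.
rng = np.random.default_rng(4)
hplc = rng.uniform(4.4, 9.0, 200)
poc = hplc + 0.4 + rng.normal(0, 0.3, 200)   # device reading ~0.4% high

r, _ = stats.pearsonr(poc, hplc)
bias = (poc - hplc).mean()                   # mean difference vs. reference
print(f"r = {r:.2f}, mean bias = {bias:+.2f}%")

# Imprecision: CV of repeated tests on one sample (cf. the 14 replicates).
reps = 6.0 + rng.normal(0, 0.1, 14)
print(f"CV = {100 * reps.std(ddof=1) / reps.mean():.2f}%")

# Cohen's kappa for agreement on diabetes classification (HbA1c >= 6.5%).
a, b = poc >= 6.5, hplc >= 6.5
po = (a == b).mean()                                         # observed agreement
pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())   # chance agreement
print(f"kappa = {(po - pe) / (1 - pe):.2f}")
```

A systematic positive bias like the one simulated here inflates apparent disease prevalence at a fixed cutoff, which is why the abstract reports high sensitivity but low specificity and PPV for the more biased device.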