59 results for Production Inventory Model with Switching Time
Abstract:
A number of security models have been proposed for RFID systems. Recent studies show that current models tend to be limited in the number of properties they capture. Consequently, models are commonly unable to distinguish between protocols with regard to finer privacy properties. This paper proposes a privacy model that introduces previously unavailable expressions of privacy. Based on the well-studied notion of indistinguishability, the model also strives to be simpler, easier to use, and more intuitive compared to previous models.
Abstract:
We developed a reproducible model of deep dermal partial-thickness burn injury in juvenile Large White pigs. The contact burn is created using water at 92 degrees C applied for 15 s through a bottle whose base has been replaced with plastic wrap. The depth of injury was determined by a histopathologist who examined tissue sections 2 and 6 days after injury in a blinded manner. Upon creation, the circular wound area developed white eschar and a hyperaemic zone around the wound border. Animals were kept for 6 weeks or 99 days to examine the wound healing process. The wounds took between 3 and 5 weeks for complete re-epithelialisation. Most wounds developed contracted, purple, hypertrophic scars. On measurement, the thickness of the burned skin was approximately 1.8 times that of control skin at week 6 and approximately 2.2 times that of control skin at 99 days after injury. We have developed various methods to assess healing wounds, including digital photographic analysis, depth of organising granulation tissue, immunohistochemistry, electron microscopy and tensiometry. Immunohistochemistry and electron microscopy showed that our porcine hypertrophic scar appears similar to human hypertrophic scarring. The development of this model allows us to test and compare different treatments on burn wounds.
Abstract:
Spatially explicit modelling of grassland classes is important for site-specific planning to improve grassland and environmental management over large areas. In this study, a climate-based grassland classification model, the Comprehensive and Sequential Classification System (CSCS), was integrated with spatially interpolated climate data to classify grassland in Gansu province, China. The study area is characterized by complex topographic features imposed by plateaus, high mountains, basins and deserts. To improve the quality of the interpolated climate data and of the spatial classification over this complex topography, three regression-based interpolation methods were evaluated: an analytic method based on multiple regression and residues (AMMRR); a modification of AMMRR that adds the effects of slope and aspect to the interpolation analysis (M-AMMRR); and a method that replaces the IDW approach for residual interpolation in M-AMMRR with ordinary kriging (I-AMMRR). The interpolation outcomes from the best method were then used in the CSCS model to classify the grassland in the study area. The climate variables interpolated were annual cumulative temperature and annual total precipitation. The results indicated that the AMMRR and M-AMMRR methods generated acceptable climate surfaces, but the best model fit and cross-validation results were achieved by I-AMMRR. Twenty-six grassland classes were identified for the study area. The four grassland vegetation classes that together covered more than half of the total study area were "cool temperate-arid temperate zonal semi-desert", "cool temperate-humid forest steppe and deciduous broad-leaved forest", "temperate-extra-arid temperate zonal desert", and "frigid per-humid rain tundra and alpine meadow". The vegetation classification map generated in this study provides spatial information on the locations and extents of the different grassland classes. This information can be used to facilitate government agencies' decision-making in land-use planning and environmental management, and for vegetation and biodiversity conservation. It can also assist land managers in estimating safe carrying capacities, which will help to prevent overgrazing and land degradation.
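The AMMRR family of methods combines a trend fitted by multiple regression on location and terrain covariates with a correction obtained by interpolating the regression residuals. The sketch below, which uses hypothetical station coordinates, elevations and precipitation values, illustrates the idea in Python with ordinary least squares for the trend and inverse-distance weighting (IDW) for the residuals; the I-AMMRR variant would replace the IDW step with ordinary kriging.

# Minimal sketch of regression-plus-residual interpolation (AMMRR-style).
# Station data (longitude, latitude, elevation, annual precipitation) are hypothetical.
import numpy as np

stations = np.array([            # lon, lat, elevation (m)
    [98.5, 36.2, 2800.0],
    [100.1, 35.4, 3100.0],
    [103.7, 36.0, 1500.0],
    [104.6, 35.6, 1700.0],
    [94.9, 39.7, 1100.0],
])
precip = np.array([380.0, 420.0, 310.0, 330.0, 45.0])   # mm/year (hypothetical)

# 1. Trend: multiple linear regression of precipitation on lon, lat, elevation.
X = np.column_stack([np.ones(len(stations)), stations])
beta, *_ = np.linalg.lstsq(X, precip, rcond=None)
residuals = precip - X @ beta

# 2. Residual surface: inverse-distance weighting of the regression residuals.
def idw(target_lonlat, station_lonlat, values, power=2.0):
    d = np.linalg.norm(station_lonlat - target_lonlat, axis=1)
    if np.any(d < 1e-9):                      # target coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# 3. Prediction at an unsampled location = regression trend + interpolated residual.
target = np.array([101.0, 36.0, 2400.0])      # lon, lat, elevation of target cell
trend = np.array([1.0, *target]) @ beta
resid = idw(target[:2], stations[:, :2], residuals)
print("Estimated annual precipitation (mm):", trend + resid)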
Abstract:
In the analysis of tagging data, it has been found that the least-squares method based on the growth-increment function (the Fabens method) produces biased estimates because it does not allow for individual variability in growth. This paper modifies the Fabens method to account for individual variability in the length asymptote. Significance tests using t-statistics or log-likelihood ratio statistics may be applied to assess the level of individual variability. Simulation results indicate that the modified method reduces the biases in the estimates to negligible proportions. Tagging data from tiger prawns (Penaeus esculentus and Penaeus semisulcatus) and rock lobster (Panulirus ornatus) are analysed as an illustration.
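For context, the Fabens method fits the increment function ΔL = (L∞ − L1)(1 − exp(−K Δt)) to release length L1, time at liberty Δt and observed increment ΔL by least squares. The sketch below fits this standard form to made-up tag-recapture data; the modification described above would additionally allow L∞ to vary between individuals.

# Minimal sketch of a Fabens-type growth-increment fit (illustrative data only).
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical tag-recapture records: length at release (mm), time at liberty (years),
# and observed length increment (mm).
L1 = np.array([28.0, 31.0, 25.0, 34.0, 29.0, 22.0])
dt = np.array([0.30, 0.55, 0.80, 0.25, 1.10, 0.60])
dL = np.array([4.1, 4.8, 7.9, 1.9, 6.2, 8.4])

def fabens_increment(X, Linf, K):
    """Expected increment: (Linf - L1) * (1 - exp(-K * dt))."""
    l1, t = X
    return (Linf - l1) * (1.0 - np.exp(-K * t))

(Linf_hat, K_hat), _ = curve_fit(fabens_increment, (L1, dt), dL, p0=(40.0, 1.0))
print(f"Estimated L_inf = {Linf_hat:.1f} mm, K = {K_hat:.2f} per year")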
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves the analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-stage approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. SDMs can then be fit separately within each envelope, and for this stage we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the Receiver Operating Characteristic (ROC) curve (AUC), but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and on how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas a zero or low FOR is desirable since it indicates that none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet clearly visible in FOR, FNR, and in the comparison of the predicted probability of presence distributions for presences and absences.
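For reference, the error rates used above all follow from the 2x2 confusion matrix obtained once a threshold on the predicted probability of presence delimits predicted presence from absence. The following minimal sketch (in Python, with an invented set of predictions and observations rather than the Bradypus data) shows how FNR, FPR, FOR and FDR are computed.

# Minimal sketch of threshold-based SDM error rates (hypothetical predictions).
import numpy as np

observed = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])          # 1 = presence, 0 = absence
p_pred   = np.array([0.8, 0.6, 0.3, 0.1, 0.4, 0.7, 0.2, 0.05, 0.9, 0.15])
threshold = 0.5
predicted = (p_pred >= threshold).astype(int)

tp = np.sum((predicted == 1) & (observed == 1))
fp = np.sum((predicted == 1) & (observed == 0))
fn = np.sum((predicted == 0) & (observed == 1))
tn = np.sum((predicted == 0) & (observed == 0))

fnr = fn / (fn + tp)   # false negative rate: observed presences predicted absent
fpr = fp / (fp + tn)   # false positive rate: observed absences predicted present
fom = fn / (fn + tn)   # false omission rate: predicted absences that were presences
fdr = fp / (fp + tp)   # false discovery rate: predicted presences that were absences
print(f"FNR={fnr:.2f}  FPR={fpr:.2f}  FOR={fom:.2f}  FDR={fdr:.2f}")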
Abstract:
In this study, we investigate the qualitative and quantitative effects of an R&D subsidy for a clean technology and a Pigouvian tax on a dirty technology on environmental R&D when it is uncertain how long the research will take to complete. The model is formulated as an optimal stopping problem, in which the number of successes required to complete the R&D project is finite and learning about the probability of success is incorporated. We show that the optimal R&D subsidy with the consideration of learning is higher than that without it. We also find that an R&D subsidy performs better than a Pigouvian tax unless suppliers have sufficient incentives to continue cost-reduction efforts after the new technology successfully replaces the old one. Moreover, using a two-project model, we show that a uniform subsidy is better than a selective subsidy.
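Purely as an illustration of the learning component mentioned above, and not the authors' model: if each research period yields a success with an unknown probability and beliefs are updated in Bayesian fashion, a Beta prior over the success probability can be updated after each trial, with R&D continuing only while the posterior belief remains sufficiently optimistic. The sketch below uses assumed costs, subsidies and thresholds.

# Illustrative Beta-Bernoulli belief updating for an R&D project that needs
# a fixed number of successes; thresholds and payoffs are assumptions, not
# taken from the paper.
import random

alpha, beta = 1.0, 1.0        # Beta(1,1) prior on the per-period success probability
successes_needed = 3
per_period_cost, subsidy = 1.0, 0.4   # hypothetical R&D cost and subsidy per period
stop_belief = 0.15            # stop if the expected success probability falls below this

random.seed(0)
true_p = 0.35                 # unknown to the firm
period = 0
while successes_needed > 0:
    expected_p = alpha / (alpha + beta)     # posterior mean of the success probability
    if expected_p < stop_belief and period > 0:
        print("Project abandoned: belief in success too low.")
        break
    period += 1
    success = random.random() < true_p
    if success:
        alpha += 1.0
        successes_needed -= 1
    else:
        beta += 1.0
    print(f"period {period}: success={success}, posterior mean p={alpha/(alpha+beta):.2f}, "
          f"net cost this period={per_period_cost - subsidy:.2f}")
else:
    print(f"R&D completed after {period} periods.")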
Abstract:
The significant challenge faced by government in demonstrating value for money in the delivery of major infrastructure revolves around estimating the costs and benefits of alternative modes of procurement. Faced with this challenge, one approach is to focus on a dominant performance outcome visible on the opening day of the asset as the means to select the procurement approach. In this case, value for money becomes a largely nominal concept, determined by whether the selected procurement mode delivers the selected performance outcome, notwithstanding possible under-delivery on other desirable performance outcomes and possibly excessive transaction costs. This paper proposes a change of mind-set in this practice, towards an approach in which the analysis commences with the conditions pertaining to the project and then deploys transaction cost and production cost theory to indicate a procurement approach that can claim superior value for money relative to competing procurement modes. This approach to delivering value for money in relative terms is developed into a first-order procurement decision-making model outlined in this paper. The model could be complementary to the Public Sector Comparator (PSC) in terms of cross-validation, and it more readily lends itself to public dissemination. As a possible alternative to the PSC, the model could save time and money because it requires project details to be prepared in less detail than for the reference project, and it may send a stronger signal to the market that encourages more innovation and competition.
Abstract:
Most studies examining the temperature–mortality association in a city used temperatures from one site or the average from a network of sites. This may cause measurement error, as temperature varies across a city due to effects such as urban heat islands. We examined whether spatiotemporal models using spatially resolved temperatures produced different associations between temperature and mortality compared with time series models that used non-spatial temperatures. We obtained daily mortality data in 163 areas across Brisbane city, Australia from 2000 to 2004. We used ordinary kriging to interpolate spatial temperature variation across the city based on 19 monitoring sites. We used a spatiotemporal model to examine the impact of spatially resolved temperatures on mortality. Also, we used a time series model to examine non-spatial temperatures using a single site and the average temperature from three sites. We used squared Pearson scaled residuals to compare model fit. We found that kriged temperatures were consistent with observed temperatures. Spatiotemporal models using kriged temperature data yielded slightly better model fit than time series models using a single site or the average of three sites' data. Despite this better fit, spatiotemporal and time series models produced similar associations between temperature and mortality. In conclusion, time series models using non-spatial temperatures were as good as spatiotemporal models at estimating the city-wide association between temperature and mortality.
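A minimal sketch of the time-series arm of such a comparison, using simulated daily data rather than the Brisbane dataset: daily death counts are regressed on temperature with a Poisson generalised linear model, and squared Pearson residuals (as used above to compare model fit) are computed from the fitted model. The quadratic temperature term is an assumption for illustration only.

# Minimal sketch: Poisson time-series regression of daily deaths on temperature,
# with squared Pearson residuals as a crude fit measure (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_days = 365
temp = 20 + 6 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 1.5, n_days)
# Simulated U-shaped temperature-mortality relationship around 21 degrees C.
mu = np.exp(1.5 + 0.002 * (temp - 21) ** 2)
deaths = rng.poisson(mu)

X = sm.add_constant(np.column_stack([temp, temp ** 2]))   # simple quadratic in temperature
model = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()

squared_pearson = model.resid_pearson ** 2
print(model.params)
print("Mean squared Pearson residual:", squared_pearson.mean())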
Abstract:
Lean strategies have been developed to eliminate or reduce waste and thus improve operational efficiency in a manufacturing environment. However, in practice, manufacturers encounter difficulties in selecting appropriate lean strategies within their resource constraints and in quantitatively evaluating the perceived value of manufacturing waste reduction. This paper presents a methodology developed to quantitatively evaluate the contribution of lean strategies selected to reduce manufacturing wastes within the manufacturer's resource (time) constraints. A mathematical model has been developed for evaluating the perceived value of lean strategies for manufacturing waste reduction, and a step-by-step methodology is provided for selecting appropriate lean strategies to improve manufacturing performance within resource constraints. A computer program was developed in MATLAB to find the optimum solution. The proposed methodology and the developed model have been validated with the help of a case study. A ‘lean strategy-wastes’ correlation matrix has been proposed to establish the relationship between manufacturing wastes and lean strategies. Using the correlation matrix and applying the proposed methodology and mathematical model, the authors obtained the optimised perceived value of reducing a manufacturer's wastes by implementing appropriate lean strategies within the manufacturer's resource constraints. The results also demonstrate that the perceived value of reducing manufacturing wastes can change significantly depending on the policies and product strategy adopted by a manufacturer. The proposed methodology can also be used in dynamic situations by changing the input to the program developed in MATLAB. By identifying appropriate lean strategies for specific manufacturing wastes, a manufacturer can better prioritise implementation efforts and resources to maximise the success of implementing lean strategies in their organisation.
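As a hedged sketch of the selection step described above (the strategy names are generic and the contribution scores, times and budget are invented, not taken from the case study): given a ‘lean strategy-wastes’ correlation matrix scoring each strategy's contribution to reducing each waste, choosing the subset of strategies that maximises total perceived value within a time budget can be treated as a 0/1 knapsack problem, solved below by dynamic programming in Python rather than by the MATLAB program described in the paper.

# Illustrative selection of lean strategies under a time budget (hypothetical numbers).
strategies = ["5S", "Kanban", "TPM", "SMED", "VSM"]
time_needed = [3, 5, 8, 4, 6]          # implementation time (e.g., weeks)
# Rows: strategies; columns: wastes (e.g., inventory, waiting, defects). Values are
# assumed contribution scores, not taken from the paper.
contribution = [
    [0.2, 0.4, 0.3],
    [0.6, 0.3, 0.1],
    [0.1, 0.5, 0.6],
    [0.3, 0.6, 0.1],
    [0.4, 0.2, 0.3],
]
value = [sum(row) for row in contribution]   # perceived value of each strategy
budget = 12                                  # available implementation time

# 0/1 knapsack by dynamic programming over the time budget.
best = [[0.0] * (budget + 1) for _ in range(len(strategies) + 1)]
for i in range(1, len(strategies) + 1):
    for t in range(budget + 1):
        best[i][t] = best[i - 1][t]
        if time_needed[i - 1] <= t:
            best[i][t] = max(best[i][t], best[i - 1][t - time_needed[i - 1]] + value[i - 1])

# Trace back the chosen strategies.
chosen, t = [], budget
for i in range(len(strategies), 0, -1):
    if best[i][t] != best[i - 1][t]:
        chosen.append(strategies[i - 1])
        t -= time_needed[i - 1]
print("Selected strategies:", chosen, "total value:", round(best[-1][budget], 2))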
Abstract:
In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links. However, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The period rarely affects navigational and transactional queries, while rates for transactional queries vary during different periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
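For illustration of the one-step prediction idea, the sketch below fits an ARIMA model to a simulated hourly series (standing in for, say, sponsored-link click counts from the transaction log) and forecasts the next observation; the transfer function component, which relates one series to another, is omitted.

# Minimal sketch of one-step-ahead time series prediction (simulated data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
hours = np.arange(24 * 14)                       # two weeks of hourly observations
daily_cycle = 50 + 20 * np.sin(2 * np.pi * hours / 24)
clicks = rng.poisson(daily_cycle)                # hypothetical sponsored-link clicks

train, actual_next = clicks[:-1], clicks[-1]
model = ARIMA(train, order=(1, 0, 1)).fit()      # simple ARMA(1,1) as an illustration
forecast = model.forecast(steps=1)[0]
print(f"one-step forecast = {forecast:.1f}, observed = {actual_next}")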
Abstract:
Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables. To date, there remains a paucity of such research. The present study examined Lazarus and Folkman’s (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (nurses and administration employees) were recruited: 49 males (age mean = 34, SD = 10.51) and 80 females (age mean = 36, SD = 10.31). The analysis of three path models indicated that, in addition to the original paths found in Lazarus and Folkman’s transactional model (primary appraisal --> secondary appraisal --> stress --> coping), there were also direct links between primary appraisal and stress level at time one, and between stress level at time one and stress level at time two. This study provides additional insights into the transactional process that will extend our understanding of how individuals appraise, cope with and experience occupational stress.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although subsets derived using the Consistency criterion tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This influence could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data only, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of a single leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR (10.1 and 10.28), while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables over risk factors alone echoes recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
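As a generic illustration of the evaluation approach described above, using simulated data and scikit-learn rather than the original anaesthesia dataset and toolchain: the majority class is under-sampled to balance the classes, a decision tree and a logistic regression are trained, and misclassification rate, the Kappa statistic and AUC are reported for each.

# Illustrative evaluation of classifiers on an unbalanced dataset with majority-class
# under-sampling; data are simulated, not the anaesthesia time-series described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)
n_pos, n_neg = 200, 1800                           # unbalanced classes (CVD vs no CVD)
X = np.vstack([rng.normal(1.0, 1.0, (n_pos, 5)), rng.normal(0.0, 1.0, (n_neg, 5))])
y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])

# Under-sample the majority class to match the minority class size.
neg_idx = rng.choice(np.where(y == 0)[0], size=n_pos, replace=False)
keep = np.concatenate([np.where(y == 1)[0], neg_idx])
X_bal, y_bal = X[keep], y[keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.3, random_state=0)
for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    prob = clf.predict_proba(X_te)[:, 1]
    mr = 1.0 - accuracy_score(y_te, pred)          # misclassification rate
    print(f"{name}: MR={mr:.3f}  Kappa={cohen_kappa_score(y_te, pred):.3f}  "
          f"AUC={roc_auc_score(y_te, prob):.3f}")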