947 results for modeling and model calibration


Relevance: 100.00%

Abstract:

Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop floor and retail performance. Although we are working within a relatively novel and complex domain, it is clear that an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK’s top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested the first version of a retail branch agent-based simulation model, focusing on how we can simulate the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop-floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments conducted in order to validate the simulation model and to show its potential for answering “what-if” questions in a retail context. We also introduce a novel performance measure, created to quantify customers’ satisfaction with service based on their individual shopping experiences.
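
To make the agent-based approach concrete, here is a minimal, hedged sketch (not the authors' model) of how a people management practice can be linked to customer satisfaction in a simulation; the Staff/Customer classes, the probabilities, and the training effect are all illustrative assumptions.

```python
import random

# Minimal agent-based sketch: each customer interacts with a randomly
# chosen staff member, and the staff member's training level (a proxy
# for "employee development") shifts the probability of a satisfactory
# service encounter. All parameter values are illustrative assumptions.

class Staff:
    def __init__(self, training):
        self.training = training  # 0.0 (untrained) .. 1.0 (fully trained)

class Customer:
    def __init__(self):
        self.satisfied = None

def simulate_day(staff, n_customers=200, base_p=0.6, training_gain=0.3):
    customers = [Customer() for _ in range(n_customers)]
    for c in customers:
        s = random.choice(staff)
        p_satisfied = base_p + training_gain * s.training
        c.satisfied = random.random() < p_satisfied
    # fraction of satisfied customers as a crude performance measure
    return sum(c.satisfied for c in customers) / n_customers

random.seed(1)
untrained = [Staff(0.0) for _ in range(10)]
trained = [Staff(1.0) for _ in range(10)]
print("satisfaction, untrained team:", simulate_day(untrained))
print("satisfaction, trained team:  ", simulate_day(trained))
```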

Relevance: 100.00%

Abstract:

Queueing systems constitute a central tool in modeling and performance analysis. These types of systems appear in our everyday activities, and the theory of queueing systems was developed to provide models for forecasting the behavior of systems subject to random demand. The practical and useful applications of discrete-time queues motivate researchers to continue analyzing this type of model. The present contribution relates to a discrete-time Geo/G/1 queue in which some messages may need a second service time in addition to the first essential service. In day-to-day life there are numerous examples of queueing situations, for example in manufacturing processes, telecommunication, and home automation; in this paper a particular application is video surveillance with intrusion recognition, where all arriving messages require the main service and only some may require the subsidiary service provided by the server under different types of strategies. We carry out a thorough study of the model, deriving analytical results for the stationary distribution. The generating functions of the number of messages in the queue and in the system are obtained. The generating functions of the busy period, as well as the sojourn times of a message in the server, the queue, and the system, are also provided.
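
As a rough illustration of the model described above, the following hedged sketch simulates a discrete-time Geo/G/1 queue with an optional second service and estimates the mean number of messages in the system; the Bernoulli arrival probability, the geometric service times, and p_second are illustrative assumptions, not the paper's analytical setting.

```python
import random

def geometric_service(p=0.5):
    # service time in slots, >= 1, geometrically distributed
    t = 1
    while random.random() > p:
        t += 1
    return t

def simulate(n_slots=200_000, p_arrival=0.2, p_second=0.3):
    queue = 0          # messages waiting
    remaining = 0      # remaining service slots of the message in service
    area = 0           # time-integral of the number in system
    for _ in range(n_slots):
        if random.random() < p_arrival:   # Bernoulli (geometric) arrivals
            queue += 1
        if remaining == 0 and queue > 0:  # start serving the next message
            queue -= 1
            remaining = geometric_service()
            if random.random() < p_second:   # optional second service
                remaining += geometric_service()
        area += queue + (1 if remaining > 0 else 0)
        if remaining > 0:
            remaining -= 1
    return area / n_slots  # estimated mean number in system

random.seed(7)
print("estimated mean number in system:", simulate())
```

With these assumed parameters the offered load is 0.2 * (2 + 0.3 * 2) = 0.52 < 1, so the simulated queue is stable.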

Relevance: 100.00%

Abstract:

Tropospheric ozone (O3) and carbon monoxide (CO) pollution in the Northern Hemisphere is commonly thought to be of anthropogenic origin. While this is true in most cases, copious quantities of pollutants are emitted by fires in boreal regions, and the impact of these fires on CO has been shown to significantly exceed the impact of urban and industrial sources during large fire years. The impact of boreal fires on ozone is still poorly quantified, and large uncertainties exist in the estimates of the fire-released nitrogen oxides (NOx), a critical factor in ozone production. As boreal fire activity is predicted to increase in the future due to its strong dependence on weather conditions, it is necessary to understand how these fires affect atmospheric composition. To determine the scale of boreal fire impacts on ozone and its precursors, this work combined statistical analysis of ground-based measurements downwind of fires, satellite data analysis, transport modeling and the results of chemical model simulations. The first part of this work focused on determining boreal fire impact on ozone levels downwind of fires, using analysis of observations in several-days-old fire plumes intercepted at the Pico Mountain station (Azores). The results of this study revealed that fires significantly increase midlatitude summertime ozone background during high fire years, implying that predicted future increases in boreal wildfires may affect ozone levels over large regions in the Northern Hemisphere. To improve current estimates of NOx emissions from boreal fires, we further analyzed ΔNOy/ΔCO enhancement ratios in the observed fire plumes together with transport modeling of fire emission estimates. The results of this analysis revealed the presence of a considerable seasonal trend in the fire NOx/CO emission ratio due to the late-summer changes in burning properties. This finding implies that the constant NOx/CO emission ratio currently used in atmospheric modeling is unrealistic, and is likely to introduce a significant bias in the estimated ozone production. Finally, satellite observations were used to determine the impact of fires on atmospheric burdens of nitrogen dioxide (NO2) and formaldehyde (HCHO) in the North American boreal region. This analysis demonstrated that fires dominated the HCHO burden over the fires and in plumes up to two days old. This finding provides insights into the magnitude of secondary HCHO production and further enhances scientific understanding of the atmospheric impacts of boreal fires.
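
For context on the ΔNOy/ΔCO enhancement ratios mentioned above, a common way to estimate such a ratio is as the slope of a linear fit of one species against the other across plume samples; the following hedged sketch uses synthetic numbers, not the study's data.

```python
import numpy as np

# The dNOy/dCO enhancement ratio in a fire plume is commonly estimated
# as the slope of a linear fit of NOy against CO for plume samples, with
# the regional background absorbed into the intercept. Synthetic data.

rng = np.random.default_rng(0)
co = rng.uniform(100, 400, size=50)            # ppbv, plume CO
true_ratio = 0.004                             # assumed ppbv NOy per ppbv CO
noy = 0.3 + true_ratio * co + rng.normal(0, 0.05, size=50)  # ppbv NOy

slope, intercept = np.polyfit(co, noy, 1)
print(f"estimated dNOy/dCO enhancement ratio: {slope:.4f} ppbv/ppbv")
```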

Relevance: 100.00%

Abstract:

Carbon monoxide (CO) and ozone (O3) are among the most important atmospheric pollutants in the troposphere, and both have significant effects on human health. Both are included in the U.S. EPA list of criteria pollutants. CO is primarily emitted in the source region, whereas O3 can be formed near the source, during transport of pollution plumes containing O3 precursors, or in a receptor region as the plumes subside. The long chemical lifetimes of both CO and O3 enable them to be transported over long distances. This transport is important on continental scales as well, where it is commonly referred to as intercontinental transport; it affects the concentrations of both CO and O3 in downwind receptor regions, with significant implications for their air quality standards. Over the period 2001-2011, anthropogenic emissions of CO and NOx decreased in North America and Europe, whereas emissions over Asia increased. How these emission trends have affected concentrations at remote sites located downwind of these continents is an important question. The PICO-NARE observatory, located on Pico Mountain in the Azores, Portugal, is frequently impacted by North American pollution outflow (both anthropogenic and biomass burning) and is a unique site for investigating long-range transport from North America. This study uses in-situ observations of CO and O3 for the period 2001-2011 at PICO-NARE, coupled with output from full-chemistry simulations (with normal and with fixed anthropogenic emissions) and tagged CO simulations in GEOS-Chem, a global 3-D chemical transport model of atmospheric composition driven by meteorological input from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office, to determine and interpret the trends in CO and O3 concentrations over the past decade. These trends are useful for ascertaining the impacts that emission reductions in the United States have had over Pico and, in general, over the North Atlantic. A regression model with sinusoidal functions and a linear trend term was fit to the in-situ observations and to the GEOS-Chem output for CO and O3 at Pico. The regression model yielded decreasing trends for CO and O3 in both the observations (-0.314 ppbv/year and -0.208 ppbv/year, respectively) and the full-chemistry simulation with normal emissions (-0.343 ppbv/year and -0.526 ppbv/year, respectively). Based on analysis of the results from the full-chemistry simulation with fixed anthropogenic emissions and from the tagged CO simulation, it was concluded that the decreasing trends in CO were a consequence of anthropogenic emission changes in regions such as the USA and Asia. The emission reductions in the USA are countered by Asian increases, but the former have a greater impact, resulting in decreasing CO trends at PICO-NARE. For O3, however, it is the increase in water vapor content (which increases O3 destruction) along the transport pathways from North America to PICO-NARE, as well as around the site, that has resulted in decreasing trends over this period. This decrease is partially offset by an increase in O3 concentrations due to anthropogenic influence, which could reflect increasing Asian emissions of O3 precursors as those emissions decreased over the US. However, the anthropogenic influence does not change the final direction of the trend. It can thus be concluded that CO and O3 concentrations at PICO-NARE decreased over 2001-2011.
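
The regression model described above can be illustrated with a hedged sketch: a linear trend plus annual sinusoidal terms, fit by ordinary least squares. The data here are synthetic and the coefficients illustrative, not the reported trend values.

```python
import numpy as np

# Fit y(t) = a + b*t + c*sin(2*pi*t) + d*cos(2*pi*t): a linear trend
# plus one annual harmonic, estimated by ordinary least squares.

rng = np.random.default_rng(1)
t = np.arange(0, 11, 1 / 12)                      # years since start, monthly
y = 95 - 0.3 * t + 5 * np.sin(2 * np.pi * t) + rng.normal(0, 1.5, t.size)

X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"fitted linear trend: {coef[1]:.3f} ppbv/year")
```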

Relevance: 100.00%

Abstract:

Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods, by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble of the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as bush or natural forest fires, we take the burnt area (BA*), a sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node split to select the best attribute. The ensemble stream modeling approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of the sensors led to the formation of quality streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast number of real-time mobile streams generated today.
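
For reference, the F-measure used above is the harmonic mean of precision and recall; a minimal sketch with illustrative counts follows.

```python
# F_beta combines precision (tp / (tp + fp)) and recall (tp / (tp + fn));
# beta = 1 gives the usual F1 score. Counts below are illustrative.

def f_measure(tp, fp, fn, beta=1.0):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: 80 detected fire events, 10 false alarms, 20 missed events.
print(f"F1 = {f_measure(tp=80, fp=10, fn=20):.3f}")
```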

Relevance: 100.00%

Abstract:

Social exchange theory and notions of reciprocity have long been assumed to explain the relationship between psychological contract breach and important employee outcomes. To date, however, there has been no explicit testing of these assumptions. This research, therefore, explores the mediating role of negative, generalized, and balanced reciprocity, in the relationships between psychological contract breach and employees’ affective organizational commitment and turnover intentions. A survey of 247 Pakistani employees of a large public university was analyzed using structural equation modeling and bootstrapping techniques, and provided excellent support for our model. As predicted, psychological contract breach was positively related to negative reciprocity norms and negatively related to generalized and balanced reciprocity norms. Negative and generalized (but not balanced) reciprocity were negatively and positively (respectively) related to employees’ affective organizational commitment and fully mediated the relationship between psychological contract breach and affective organizational commitment. Moreover, affective organizational commitment fully mediated the relationship between generalized and negative reciprocity and employees’ turnover intentions. Implications for theory and practice are discussed.
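
As a hedged illustration of the bootstrapping approach mentioned above, the following sketch estimates a confidence interval for an indirect (mediated) effect as the product of two regression coefficients; the data are synthetic, and this simple two-equation setup stands in for the full structural equation model.

```python
import numpy as np

# Synthetic stand-in for the survey data (n = 247 as in the study):
# breach -> reciprocity norm (mediator) -> commitment (outcome).
rng = np.random.default_rng(42)
n = 247
breach = rng.normal(size=n)
reciprocity = -0.5 * breach + rng.normal(size=n)
commitment = 0.6 * reciprocity + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # x -> m path
    Z = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Z, y, rcond=None)[0][1]  # m -> y path, controlling x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample with replacement
    boot.append(indirect_effect(breach[idx], reciprocity[idx], commitment[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```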

Relevance: 100.00%

Abstract:

Our goal here is a more complete understanding of how information about luminance contrast is encoded and used by the binocular visual system. In two-interval forced-choice experiments we assessed observers' ability to discriminate changes in contrast that could be an increase or decrease of contrast in one or both eyes, or an increase in one eye coupled with a decrease in the other (termed IncDec). The base or pedestal contrasts were either in-phase or out-of-phase in the two eyes. The opposed changes in the IncDec condition did not cancel each other out, implying that along with binocular summation, information is also available from mechanisms that do not sum the two eyes' inputs. These might be monocular mechanisms. With a binocular pedestal, monocular increments of contrast were much easier to see than monocular decrements. These findings suggest that there are separate binocular (B) and monocular (L,R) channels, but only the largest of the three responses, max(L,B,R), is available to perception and decision. Results from contrast discrimination and contrast matching tasks were described very accurately by this model. Stimuli, data, and model responses can all be visualized in a common binocular contrast space, allowing a more direct comparison between models and data. Some results with out-of-phase pedestals were not accounted for by the max model of contrast coding, but were well explained by an extended model in which gratings of opposite polarity create the sensation of lustre. Observers can discriminate changes in lustre alongside changes in contrast.
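
A hedged sketch of the max(L,B,R) idea described above: each eye feeds a monocular channel, a binocular channel sums the two inputs, and the largest of the three responses drives the decision. The transducer form and its constants are illustrative placeholders, not the paper's fitted model.

```python
# Three channels: monocular left (L), monocular right (R), and a
# binocular channel (B) that sums the two eyes' contrasts. Perception
# is driven by max(L, B, R).

def channel_response(c, p=2.4, q=2.0, z=5.0):
    # generic nonlinear contrast transducer (assumed form)
    return c**p / (z + c**q)

def max_model(contrast_left, contrast_right):
    L = channel_response(contrast_left)
    R = channel_response(contrast_right)
    B = channel_response(contrast_left + contrast_right)  # binocular summation
    return max(L, B, R)

# A monocular increment on a binocular pedestal vs. a monocular decrement:
pedestal = 10.0
print("increment response:", max_model(pedestal + 2.0, pedestal))
print("decrement response:", max_model(pedestal - 2.0, pedestal))
```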

Relevance: 100.00%

Abstract:

In this project an optimal pose selection method for the calibration of an overconstrained cable-driven parallel robot is presented. This manipulator belongs to a subcategory of parallel robots in which the classic rigid "legs" are replaced by cables. Cables are flexible elements that bring both advantages and disadvantages to the modeling of the robot. For this reason there are many open research issues, and the calibration of geometric parameters is one of them. The identification of the geometry of a robot, in particular, is usually called kinematic calibration. Many methods have been proposed in recent years for the solution of this problem. Although these methods are based on calibration using different kinematic models, their robustness and reliability decrease as the robot's geometry becomes more complex. This makes the selection of the calibration poses more complicated: the position and orientation of the end-effector in the workspace become important selection criteria. Thus, in general, it is necessary to evaluate the robustness of the chosen calibration method by means of a parameter such as the observability index. It is known from theory that maximizing this index identifies the best choice of calibration poses, and consequently, using this pose set may improve the calibration process. The objective of this thesis is to analyze optimization algorithms that aim to calculate an optimal choice of poses in both quantitative and qualitative terms. Quantitatively, because it is of fundamental importance to understand how many poses are needed: a greater number of poses does not necessarily lead to a better result. Qualitatively, because it is useful to understand whether the selected combination of poses actually adds information to the process of identifying the parameters.
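
As one concrete example of an observability index, a commonly used definition is O1 = (s1 * s2 * ... * sm)^(1/m) / sqrt(n), built from the singular values of the stacked identification Jacobian over a candidate pose set (m parameters, n poses). The hedged sketch below uses random placeholder Jacobians in place of a real kinematic model; which index and which Jacobian apply here are assumptions.

```python
import numpy as np

def observability_index(J, n_poses):
    # O1: geometric mean of the singular values, normalized by sqrt(n)
    s = np.linalg.svd(J, compute_uv=False)
    m = J.shape[1]
    return np.prod(s) ** (1.0 / m) / np.sqrt(n_poses)

rng = np.random.default_rng(3)
m_params, rows_per_pose = 6, 3   # illustrative sizes

def stacked_jacobian(n_poses):
    # placeholder: in practice each block comes from the kinematic model
    return np.vstack([rng.normal(size=(rows_per_pose, m_params))
                      for _ in range(n_poses)])

for n_poses in (4, 8, 16):       # candidate pose sets of growing size
    J = stacked_jacobian(n_poses)
    print(n_poses, "poses -> O1 =", observability_index(J, n_poses))
```

Maximizing such an index over candidate pose sets is one way to formalize the "best poses" selection discussed in the abstract.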

Relevance: 100.00%

Abstract:

Hydrogen sulfide (H2S) is a widely recognized gasotransmitter, with key roles in physiological and pathological processes. The accurate quantification of H2S and reactive sulfur species (RSS) may hold important implications for the diagnosis and prognosis of various diseases. However, the quantification of H2S species in biological matrices is still a challenge. Among the sulfide detection methods, monobromobimane (MBB) derivatization coupled with reversed-phase high-performance liquid chromatography (RP-HPLC) is one of the most commonly reported. However, it is characterized by a complex and time-consuming preparation process, which may alter the actual H2S level. Moreover, based on a survey of previously published works, quantitative validation has still not been described. In this study, we developed and validated an improved analytical protocol for the MBB RP-HPLC method. Main parameters such as MBB concentration, temperature, reaction time, and sample handling were optimized, and the calibration method was further validated using leave-one-out cross-validation (CV) and tested in a clinical setting. The method shows high sensitivity and allows the quantification of H2S species, with a limit of detection (LOD) of 0.5 µM and a limit of quantification (LOQ) of 0.9 µM. Additionally, the model was successfully applied to measurements of H2S levels in the serum of patients subjected to inhalation of vapors rich in H2S. In addition, a proper procedure was established for measuring H2S release with the modified MBB HPLC-FLD method. The proposed analytical approach demonstrated the slow-release kinetics of H2S from multilayer silk-fibroin scaffolds with different H2S donor concentrations relative to the weight of the PLGA nanofibers. Finally, preliminary sulfide measurements were made using size exclusion chromatography with fluorescence/ultraviolet detection and inductively coupled plasma mass spectrometry (SEC-FLD/UV-ICP/MS), intended as a preliminary study to assess the feasibility of a separation-detection-quantification platform for analyzing biological samples and quantifying sulfur species.
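
The LOD and LOQ quoted above are conventionally derived from a calibration curve as 3.3·σ/S and 10·σ/S, where S is the calibration slope and σ the standard deviation of the residuals (or of blank responses); the following hedged sketch uses synthetic data, not the validated protocol's measurements.

```python
import numpy as np

# Calibration-curve-based LOD/LOQ: fit signal vs. concentration, take
# sigma from the fit residuals, then LOD = 3.3*sigma/slope and
# LOQ = 10*sigma/slope. Standards and noise level are illustrative.

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])   # µM standards
rng = np.random.default_rng(5)
signal = 50.0 * conc + 3.0 + rng.normal(0, 8.0, conc.size)  # detector response

slope, intercept = np.polyfit(conc, signal, 1)
residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

print(f"LOD = {3.3 * residual_sd / slope:.2f} µM")
print(f"LOQ = {10 * residual_sd / slope:.2f} µM")
```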

Relevance: 100.00%

Abstract:

The most widespread work-related diseases are musculoskeletal disorders (MSD), caused by awkward postures and excessive strain on upper limb muscles during work operations. The use of wearable IMU sensors could monitor workers constantly to prevent hazardous actions, thus diminishing work injuries. In this thesis, procedures are developed and tested for ergonomic analyses in a working environment, based on a commercial motion capture (MoCap) system made of 17 Inertial Measurement Units (IMUs). An IMU is usually made of a tri-axial gyroscope, a tri-axial accelerometer, and a tri-axial magnetometer that, through sensor fusion algorithms, estimate its attitude. Effective strategies for preventing MSD rely on various aspects: first, the accuracy of the IMU, which depends on the chosen sensor and its calibration; second, the correct identification of the pose of each sensor on the worker’s body; third, the chosen multibody model, which must consider both accuracy and computational burden in order to provide results in real time; finally, the model scaling law, which defines the possibility of a fast and accurate personalization of the multibody model geometry. Moreover, MSD can be diminished by using collaborative robots (cobots) as assistive devices for complex or heavy operations, to relieve the worker's effort during repetitive tasks. All these aspects are considered in order to test and show the efficiency and usability of inertial MoCap systems for real-time ergonomics assessment and for implementing safety control strategies in collaborative robotics. Validation is performed with several experimental tests, both to test the proposed procedures and to compare the results of the real-time multibody models developed in this thesis with the results from commercial software. As an additional result, the positive effects of using cobots as assistive devices for reducing human effort in repetitive industrial tasks are also shown, demonstrating the potential of wearable electronics in on-field ergonomics analyses for industrial applications.
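
As a hedged illustration of the sensor-fusion step mentioned above, the sketch below implements a basic complementary filter for a single tilt angle: the gyroscope path is accurate over short intervals but drifts, while the accelerometer's gravity reference is noisy but drift-free. Real MoCap systems use more elaborate (e.g., Kalman-based) filters and also fuse the magnetometer for heading; the signals and the blending weight here are assumptions.

```python
import math

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # alpha weights the integrated gyro path; (1 - alpha) pulls the
    # estimate toward the accelerometer's gravity-referenced tilt
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle

def accel_tilt(ax, az):
    # pitch angle from the gravity components seen by the accelerometer
    return math.atan2(ax, az)

angle, dt = 0.0, 0.01
for step in range(100):                      # 1 s of synthetic motion
    gyro_rate = 0.5                          # rad/s, synthetic measurement
    ax = math.sin(0.5 * step * dt)           # synthetic gravity components
    az = math.cos(0.5 * step * dt)
    angle = complementary_filter(angle, gyro_rate, accel_tilt(ax, az), dt)
print(f"estimated pitch after 1 s: {angle:.3f} rad")
```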

Relevance: 100.00%

Abstract:

In this work, we explore and demonstrate the potential for modeling and classification using quantile-based distributions, which are random variables defined by their quantile function. In the first part we formalize a least squares estimation framework for the class of linear quantile functions, leading to unbiased and asymptotically normal estimators. Among the distributions with a linear quantile function, we focus on the flattened generalized logistic distribution (fgld), which offers a wide range of distributional shapes. A novel naïve-Bayes classifier is proposed that utilizes the fgld estimated via least squares, and through simulations and applications, we demonstrate its competitiveness against state-of-the-art alternatives. In the second part we consider the Bayesian estimation of quantile-based distributions. We introduce a factor model with independent latent variables, which are distributed according to the fgld. Similar to the independent factor analysis model, this approach accommodates flexible factor distributions while using fewer parameters. The model is presented within a Bayesian framework, an MCMC algorithm for its estimation is developed, and its effectiveness is illustrated with data coming from the European Social Survey. The third part focuses on depth functions, which extend the concept of quantiles to multivariate data by imposing a center-outward ordering in the multivariate space. We investigate the recently introduced integrated rank-weighted (IRW) depth function, which is based on the distribution of random spherical projections of the multivariate data. This depth function proves to be computationally efficient and to increase its flexibility we propose different methods to explicitly model the projected univariate distributions. Its usefulness is shown in classification tasks: the maximum depth classifier based on the IRW depth is proven to be asymptotically optimal under certain conditions, and classifiers based on the IRW depth are shown to perform well in simulated and real data experiments.
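
To illustrate the least-squares idea in the first part, consider any distribution whose quantile function is linear in its parameters, Q(p) = sum_j theta_j * g_j(p): the order statistics can be regressed on the basis functions evaluated at p_i = i/(n+1). The hedged sketch below uses a plain logistic basis as a stand-in; the fgld basis itself is richer.

```python
import numpy as np

# Least-squares estimation for a linear quantile function: regress the
# sorted sample (order statistics) on the quantile basis at p = i/(n+1).
# Here the basis is that of the logistic distribution,
# Q(p) = loc + scale * log(p / (1 - p)), as a simple stand-in.

rng = np.random.default_rng(11)
x = np.sort(rng.logistic(loc=2.0, scale=1.5, size=500))   # order statistics
n = x.size
p = np.arange(1, n + 1) / (n + 1)

G = np.column_stack([np.ones(n), np.log(p / (1 - p))])    # basis matrix
theta, *_ = np.linalg.lstsq(G, x, rcond=None)
print("estimated location, scale:", theta)                # ~ (2.0, 1.5)
```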

Relevance: 100.00%

Abstract:

This thesis aims to understand the behavior of a low-rise unreinforced masonry (URM) building, the typical residential house in the Netherlands, when subjected to low-intensity earthquakes. In the last decades, the Groningen region was hit by several shallow earthquakes caused by the extraction of natural gas. In particular, the focus is on the internal non-structural walls and on their interaction with the structural parts of the building. A simple and cost-efficient 2D FEM model is developed, focused on the interfaces representing the mortar layers present between the non-structural walls and the rest of the structure. As a reference for geometries and materials, a prototype built at full scale at the EUCENTRE laboratory in Pavia (Italy) was taken into consideration. First, a quasi-static analysis is performed by gradually applying a prescribed displacement at the roof floor of the structure. Sensitivity analyses are conducted on some key parameters characterizing the mortar; this analysis allows the calibration of their values and the evaluation of the reliability of the model. Subsequently, a transient analysis is performed to subject the model to a seismic action and hence also evaluate the mechanical response of the building over time. Moreover, by creating a model representing the entire considered structure, it was possible to compare the results of this analysis with the displacements recorded in the experimental tests. As a result, some conditions for the model calibration are defined. The reliability of the model is then confirmed both by the reasonable results obtained from the sensitivity analysis and by the compatibility between the top displacement of the roof floor measured in the experimental test and the same value obtained from the structural model.

Relevance: 100.00%

Abstract:

The objective of this study was to estimate the regression calibration for the dietary data measured using the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men (HIM) Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ was the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. The dispersion graphs between the instruments demonstrate increased correlation after the correction, although there is greater dispersion of the points for models with worse explanatory power. Identification of the regression calibration for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
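
A hedged sketch of the regression-calibration step described above: the 24HR reference measurement is regressed on the QFFQ plus covariates, and the fitted values serve as calibration-corrected intakes. The data and the single covariate here are synthetic stand-ins for the study's adjustment variables.

```python
import numpy as np

# Regression calibration: 24HR ~ QFFQ + covariates; the fitted values
# are the calibration-corrected QFFQ intakes. Synthetic data (n = 98
# as in the study); only age is used here as an illustrative covariate.

rng = np.random.default_rng(9)
n = 98
true_intake = rng.lognormal(mean=3.0, sigma=0.3, size=n)
qffq = true_intake * rng.lognormal(0.1, 0.25, size=n)     # biased, noisy
recall24h = true_intake * rng.lognormal(0.0, 0.15, size=n)
age = rng.uniform(18, 70, size=n)

X = np.column_stack([np.ones(n), qffq, age])
coef, *_ = np.linalg.lstsq(X, recall24h, rcond=None)
corrected = X @ coef   # calibration-corrected QFFQ values

print("geometric mean, 24HR:     ", np.exp(np.mean(np.log(recall24h))))
print("geometric mean, corrected:", np.exp(np.mean(np.log(corrected))))
```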

Relevance: 100.00%

Abstract:

The supply of tomatoes for industrial processing is a relatively complex activity. Large-scale industrial plants require high daily volumes of raw material. On the other hand, the fruit is highly perishable and harvesting is still predominantly manual. A mathematical model was developed with the purpose of objectively understanding the tomato supply process and also of identifying possibilities for its optimization. Simulation based on the model can generate scenarios that, when compared with the performance actually observed in the field, highlight the importance of accurate management, with potentially significant financial gains in the supply chain arising from reductions in time, losses, and costs. Product losses could be reduced from more than 2% to below 1%. Lower idle capacity would translate into a lower opportunity cost and increased revenue. For a plant consuming 336 thousand tons of tomatoes per year, the improvement in the raw material supply could result in estimated gains of R$ 6 million per year.

Relevance: 100.00%

Abstract:

In this Letter, we propose a new and model-independent cosmological test for the distance-duality (DD) relation, $\eta = D_L(z)(1+z)^{-2}/D_A(z) = 1$, where $D_L$ and $D_A$ are, respectively, the luminosity and angular diameter distances. For $D_L$ we consider two sub-samples of Type Ia supernovae (SNe Ia) taken from the Constitution data, whereas $D_A$ distances are provided by two samples of galaxy clusters compiled by De Filippis et al. and Bonamente et al. by combining Sunyaev-Zeldovich effect and X-ray surface brightness observations. The SNe Ia redshifts of each sub-sample were carefully chosen to coincide with those of the associated galaxy cluster sample ($\Delta z < 0.005$), thereby allowing a direct test of the DD relation. Since for very low redshifts $D_A(z) \approx D_L(z)$, we have tested the DD relation by assuming that $\eta$ is a function of the redshift parameterized by two different expressions: $\eta(z) = 1 + \eta_0 z$ and $\eta(z) = 1 + \eta_0 z/(1+z)$, where $\eta_0$ is a constant parameter quantifying a possible departure from the strict validity of the reciprocity relation ($\eta_0 = 0$). In the best scenario (linear parameterization), we obtain $\eta_0 = -0.28^{+0.44}_{-0.44}$ ($2\sigma$, statistical + systematic errors) for the De Filippis et al. sample (elliptical geometry), a result only marginally compatible with the DD relation. However, for the Bonamente et al. sample (spherical geometry) the constraint is $\eta_0 = -0.42^{+0.34}_{-0.34}$ ($3\sigma$, statistical + systematic errors), which is clearly incompatible with the duality-distance relation.
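
A hedged sketch of the test: for matched SN/cluster pairs one forms $\eta_{\rm obs}(z) = D_L(z)(1+z)^{-2}/D_A(z)$ and fits the linear parameterization $\eta(z) = 1 + \eta_0 z$, for which weighted least squares gives a closed-form estimate. The distances and errors below are synthetic placeholders, not the Constitution/cluster samples.

```python
import numpy as np

# Synthetic matched pairs obeying the DD relation exactly (eta0 = 0),
# with multiplicative noise on D_L standing in for measurement errors.
rng = np.random.default_rng(8)
z = np.linspace(0.05, 0.8, 25)
d_a = 3000.0 * z / (1 + z)                                   # toy D_A, Mpc
d_l = d_a * (1 + z) ** 2 * (1 + rng.normal(0, 0.05, z.size))  # noisy D_L

eta_obs = d_l * (1 + z) ** -2 / d_a
sigma = 0.05 * np.ones_like(z)          # assumed uncertainty on eta_obs

# chi2(eta0) = sum(((eta_obs - (1 + eta0*z)) / sigma)**2) is quadratic
# in eta0, so the minimizer has a closed form:
w = z / sigma**2
eta0_hat = np.sum(w * (eta_obs - 1)) / np.sum(z * w)
print(f"best-fit eta0 = {eta0_hat:.3f} (consistent with 0 by construction)")
```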