905 results for Hierarchical Bayes


Relevance: 20.00%

Abstract:

Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC).
The NBRM was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial model generally provides a better fit to the data than the Poisson model, and that the ZINB model gives the best fit when the count data exhibit excess zeros and over-dispersion, as they do for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. With improved traffic and crash data quality, however, the crash prediction power of SPF models may be further improved.
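The Poisson-versus-Negative-Binomial comparison described above can be illustrated with a minimal sketch. The snippet below fits intercept-only Poisson and NB2 models to a vector of segment crash counts and compares them by AIC; the method-of-moments dispersion estimate and the function names are illustrative assumptions, not the dissertation's models, which regress crashes on traffic exposure.

```python
import math

def poisson_aic(y):
    """Fit an intercept-only Poisson model by MLE and return its AIC."""
    mu = sum(y) / len(y)  # MLE of the Poisson mean is the sample mean
    ll = sum(yi * math.log(mu) - mu - math.lgamma(yi + 1) for yi in y)
    return 2 * 1 - 2 * ll  # one parameter

def negbin_aic(y):
    """Fit an intercept-only NB2 model (var = mu + alpha*mu^2) by the
    method of moments and return its AIC."""
    n = len(y)
    mu = sum(y) / n
    var = sum((yi - mu) ** 2 for yi in y) / n
    alpha = max((var - mu) / mu ** 2, 1e-8)  # over-dispersion parameter
    r = 1.0 / alpha
    ll = sum(math.lgamma(yi + r) - math.lgamma(r) - math.lgamma(yi + 1)
             + r * math.log(r / (r + mu)) + yi * math.log(mu / (r + mu))
             for yi in y)
    return 2 * 2 - 2 * ll  # two parameters
```

On over-dispersed counts with many zeros, as typical for one-mile crash segments, the NB2 AIC comes out lower than the Poisson AIC, mirroring the dissertation's finding that the Negative Binomial family fits better.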

Relevance: 20.00%

Abstract:

This paper proposes a method to evaluate hierarchical image segmentation procedures, enabling comparisons between different hierarchical algorithms, and of these with non-hierarchical segmentation techniques as well as with edge detectors. The proposed method builds on the edge-based segmentation evaluation approach by considering a set of reference human segmentations as a sample drawn from the population of different levels of detail that may be used in segmenting an image. Our main point is that, since a hierarchical sequence of segmentations approximates such a population, those segmentations in the sequence that best capture each human segmentation's level of detail should provide the basis for evaluating the hierarchical sequence as a whole. A small computational experiment is carried out to show the feasibility of our approach.
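The level-matching idea can be sketched as follows, assuming each segmentation (hierarchy level or human reference) is represented as a set of boundary-pixel coordinates; the boundary F-measure used here is a common stand-in for the paper's edge-based evaluation, and all names are hypothetical.

```python
def f_measure(detected, reference):
    """Boundary-based F-measure between two sets of edge-pixel coordinates."""
    if not detected or not reference:
        return 0.0
    tp = len(detected & reference)          # boundary pixels found in both
    precision = tp / len(detected)
    recall = tp / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def evaluate_hierarchy(levels, humans):
    """Score a hierarchical segmentation: for each human reference, keep the
    best-matching level in the hierarchy; the hierarchy's score is the mean
    of these per-reference maxima."""
    best = [max(f_measure(level, h) for level in levels) for h in humans]
    return sum(best) / len(best)
```

A hierarchy that contains a level matching each human's level of detail scores 1.0, while a hierarchy missing a level of detail is penalized only for the references it cannot match.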

Relevance: 20.00%

Abstract:

Acknowledgements MW and RVD have been supported by the German Federal Ministry for Education and Research via the BMBF Young Investigators Group CoSy-CC2 (grant no. 01LN1306A). JFD thanks the Stordalen Foundation and BMBF (project GLUES) for financial support. JK acknowledges the IRTG 1740 funded by DFG and FAPESP. Coupled climate network analysis has been performed using the Python package pyunicorn (Donges et al., 2015a), which is available at https://github.com/pik-copan/pyunicorn.

Relevance: 20.00%

Abstract:

Acknowledgments This study was financed by FEDER funds through the Programa Operacional Factores de Competitividade (COMPETE) and national funds through the Portuguese Foundation for Science and Technology (FCT), within the scope of the projects PERSIST (PTDC/BIA-BEC/105110/2008), NETPERSIST (PTDC/AAG-MAA/3227/2012), and MateFrag (PTDC/BIA-BIC/6582/2014). RP was supported by the FCT grants SFRH/BPD/73478/2010 and SFRH/BPD/109235/2015. PB was supported by the EDP Biodiversity Chair. We thank Rita Brito and Marta Duarte for help during field work. We thank Chris Sutherland, Douglas Morris, William Morgan, and Richard Hassall for critical reviews of early versions of the paper. We also thank two anonymous reviewers for helpful comments that improved the paper.

Relevance: 20.00%

Abstract:

Using survey data from 358 online customers, the study finds that the e-service quality construct conforms to the structure of a third-order factor model that links online service quality perceptions to distinct and actionable dimensions, including (1) website design, (2) fulfilment, (3) customer service, and (4) security/privacy. Each dimension is found to consist of several attributes that define the basis of e-service quality perceptions. A comprehensive specification of the construct, which includes attributes not covered in existing scales, is developed. The study contrasts a formative model consisting of 4 dimensions and 16 attributes against a reflective conceptualization. The results of this comparison indicate that studies using an incorrectly specified model overestimate the importance of certain e-service quality attributes. Global fit criteria are also found to support the detection of measurement misspecification. Meta-analytic data from 31,264 online customers are used to show that the developed measurement predicts customer behavior better than widely used scales, such as WebQual and E-S-Qual. The results show that the new measurement enables managers to assess e-service quality more accurately and predict customer behavior more reliably.

Relevance: 20.00%

Abstract:

Marine heatwaves (MHWs) have been observed around the world and are expected to increase in intensity and frequency under anthropogenic climate change. A variety of impacts have been associated with these anomalous events, including shifts in species ranges, local extinctions and economic impacts on seafood industries through declines in important fishery species and impacts on aquaculture. Extreme temperatures are increasingly seen as important influences on biological systems, yet a consistent definition of MHWs does not exist. A clear definition will facilitate retrospective comparisons between MHWs, enabling the synthesis and a mechanistic understanding of the role of MHWs in marine ecosystems. Building on research into atmospheric heatwaves, we propose both a general and a specific definition for MHWs, based on a hierarchy of metrics that allow different data sets to be used in identifying MHWs. We generally define a MHW as a prolonged, discrete, anomalously warm water event that can be described by its duration, intensity, rate of evolution, and spatial extent. Specifically, we consider an anomalously warm event to be a MHW if it lasts for five or more days, with temperatures warmer than the 90th percentile based on a 30-year historical baseline period. This structure provides flexibility with regard to the description of MHWs and transparency in communicating MHWs to a general audience. The use of these metrics is illustrated for three 21st century MHWs: the northern Mediterranean event in 2003, the Western Australia ‘Ningaloo Niño’ in 2011, and the northwest Atlantic event in 2012. We recommend a specific quantitative definition for MHWs to facilitate global comparisons and to advance our understanding of these phenomena.
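The specific definition above translates almost directly into code. The sketch below flags runs of at least five consecutive days above a precomputed 90th-percentile climatological threshold (assumed given, e.g. derived from a 30-year baseline); function and variable names are illustrative, not taken from any particular software package.

```python
def detect_mhws(sst, threshold, min_duration=5):
    """Flag marine heatwaves: runs of at least `min_duration` consecutive
    days with SST above the day's climatological 90th-percentile threshold.
    Returns (start_day, end_day, max_intensity) tuples; end is inclusive,
    and intensity is the peak exceedance above the threshold."""
    events, start = [], None
    for day, temp in enumerate(sst):
        if temp > threshold[day]:
            if start is None:
                start = day            # a candidate event begins
        else:
            if start is not None and day - start >= min_duration:
                peak = max(sst[d] - threshold[d] for d in range(start, day))
                events.append((start, day - 1, peak))
            start = None               # candidate too short, or no event
    # close out an event that runs to the end of the record
    if start is not None and len(sst) - start >= min_duration:
        peak = max(sst[d] - threshold[d] for d in range(start, len(sst)))
        events.append((start, len(sst) - 1, peak))
    return events
```

A six-day warm spell qualifies as a MHW under this definition, while a three-day spike does not, which is exactly the discreteness-plus-duration criterion the authors propose.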

Relevance: 20.00%

Abstract:

Demand response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementation of DR algorithms using a centralized agent can be problematic for scalability reasons, and there are issues related to the privacy of data and robustness to communication failures. Thus, it is desirable to use a scalable decentralized algorithm for the implementation of DR. In this paper, a hierarchical DR scheme is proposed for peak minimization based on Dantzig-Wolfe decomposition (DWD). In addition, a time-weighted maximization option is included in the cost function, which improves the quality of service for devices seeking to receive their desired energy sooner rather than later. This paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of the upper and lower cost bounds after each DWD iteration.
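The peak-minimization objective can be illustrated with a toy scheduler. The sketch below is a simple greedy heuristic, not the paper's Dantzig-Wolfe decomposition: each device's energy units are placed into its least-loaded allowed time slot, flattening the aggregate load. All names and numbers are hypothetical.

```python
def schedule_loads(base_load, devices):
    """Greedy peak-flattening sketch. `base_load` is the uncontrollable load
    per time slot; each device is (energy_units, allowed_slots). Every unit
    of a device's energy goes into the currently least-loaded allowed slot.
    (The paper solves this class of problem via DWD with provable cost
    bounds; this heuristic only illustrates the objective.)"""
    load = list(base_load)
    for energy, slots in devices:
        for _ in range(energy):
            s = min(slots, key=lambda t: load[t])  # least-loaded allowed slot
            load[s] += 1
    return load
```

With a base load of [3, 0, 0, 1] and two flexible devices, the greedy schedule fills the valleys first, so the peak stays at the base-load maximum instead of growing.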

Relevance: 20.00%

Abstract:

Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration is achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Passenger trajectory data are modelled as a time series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers, and four metrics were used to evaluate performance. The system, with an accuracy greater than 90% on each of the four metrics, was found to outperform both a rule-based system and a system using planning alone.

Relevance: 20.00%

Abstract:

Otto-von-Guericke-Universität Magdeburg, Faculty of Mechanical Engineering, dissertation, 2016

Relevance: 20.00%

Abstract:

Hierarchical structure with nested nonlocal dependencies is a key feature of human language and can be identified theoretically in most pieces of tonal music. However, previous studies have argued against the perception of such structures in music. Here, we show processing of nonlocal dependencies in music. We presented chorales by J. S. Bach and modified versions in which the hierarchical structure was rendered irregular whereas the local structure was kept intact. Brain electric responses differed between regular and irregular hierarchical structures, in both musicians and nonmusicians. This finding indicates that, when listening to music, humans apply cognitive processes that are capable of dealing with long-distance dependencies resulting from hierarchically organized syntactic structures. Our results reveal that a brain mechanism fundamental for syntactic processing is engaged during the perception of music, indicating that processing of hierarchical structure with nested nonlocal dependencies is not just a key component of human language, but a multidomain capacity of human cognition.

Relevance: 20.00%

Abstract:

The hierarchical forest planning process currently in place on public lands risks failure at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At a lower level, the current process does not support realization of the full value-creation potential of the forest resource, sometimes needlessly constraining short-term harvest planning. These failures are attributable to certain assumptions implicit in the allowable-cut optimization model, which may explain why this problem is not well documented in the literature. We use agency theory to model the hierarchical forest planning process on public lands. We develop a two-stage iterative simulation framework to estimate the long-term effect of the interaction between the government and the fibre consumer, allowing us to identify conditions that can lead to stockouts. We then propose an improved formulation of the allowable-cut optimization model. The classic formulation (i.e., maximization of the sustained fibre yield) does not consider that the industrial fibre consumer seeks to maximize profit, but instead assumes total consumption of the fibre supply in every period, regardless of its value-creation potential. We extend the classic formulation to anticipate the fibre consumer's behaviour, thereby increasing the probability that the fibre supply will be fully consumed and restoring the validity of the total-consumption assumption implicit in the optimization model.
We model the principal-agent relationship between the government and industry using a bilevel formulation of the optimization model, in which the upper level represents the allowable-cut determination process (the government's responsibility) and the lower level represents the fibre-consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of stockouts, thereby improving the credibility of the hierarchical forest planning process. Together, the bilevel allowable-cut optimization model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and the iterative simulation framework are a step forward in value-driven forest planning technology. Explicitly integrating industrial objectives and constraints into the forest planning process, starting with the determination of the allowable cut, should foster closer collaboration between government and industry, making it possible to exploit the full value-creation potential of the forest resource.
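The bilevel principal-agent idea can be sketched by brute force: the lower level computes the industry's profit-maximizing consumption for a given offer, and the upper level enumerates offer levels while anticipating that response instead of assuming total consumption. The quadratic profit function, the capacity cap, and all numbers below are hypothetical illustrations, not the thesis's actual model.

```python
def follower_best_response(offer, price=10.0, capacity=60.0):
    """Lower level (industry): consume fibre only while marginal profit is
    positive. With profit(c) = price*c - c**2 / 10, the unconstrained
    optimum is c* = 5 * price, further limited by the offer and capacity."""
    return min(offer, 5 * price, capacity)

def bilevel_offer(candidate_offers):
    """Upper level (government): enumerate offer levels and keep the one
    whose *anticipated* consumption is highest -- a brute-force sketch of
    the bilevel formulation."""
    return max(candidate_offers, key=follower_best_response)
```

The sketch exposes the failure mode discussed above: a classic plan offering 100 units and assuming total consumption would overstate realized harvest, since the profit-maximizing industry only takes 50, whereas the anticipative (bilevel) plan selects an offer the industry will actually consume.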

Relevance: 20.00%

Abstract:

The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and independent and identically distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs.
Infinite-horizon problems with linear penalty for detection delay and independent and identically distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
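The cumulative-sum procedure mentioned above can be sketched for the textbook case of a Gaussian mean shift with unit variance; the log-likelihood-ratio form below is the standard CUSUM statistic, not the dissertation's epsilon-optimal parameterization, and all names are illustrative.

```python
def cusum(observations, pre_mean, post_mean, threshold):
    """One-sided CUSUM for a mean shift: accumulate the log-likelihood
    ratio of the post-change versus pre-change unit-variance Gaussian,
    reset the running sum at zero, and raise an alarm when it exceeds
    `threshold`. Returns the alarm index, or None if no change is found."""
    s = 0.0
    for t, x in enumerate(observations):
        # Gaussian LLR simplifies to a linear function of the observation
        llr = (post_mean - pre_mean) * (x - (pre_mean + post_mean) / 2.0)
        s = max(0.0, s + llr)   # the reset makes the statistic recurrent
        if s > threshold:
            return t
    return None
```

The trade-off governed by `threshold` is exactly the one the dissertation's minimax analysis formalizes: a larger threshold lowers the false-alarm rate at the cost of a longer detection delay.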