941 results for Zero-deflation
Abstract:
Discrete and continuous zero-inflated models have a wide range of applications and their properties are well known. Although there is existing work on discrete zero-deflated and zero-modified models, the usual formulation of continuous zero-inflated models -- a mixture of a continuous density and a point mass at zero -- prevents them from being generalized to cover zero-deflation. An alternative formulation of continuous zero-inflated models, which readily generalizes to the zero-deflated case, is presented here. Estimation is first addressed under the classical paradigm, and several methods for obtaining maximum likelihood estimators are proposed. Point estimation is also considered from the Bayesian point of view. Classical and Bayesian hypothesis tests for determining whether data are zero-inflated or zero-deflated are presented. The estimation and testing methods are also evaluated through simulation studies and applied to aggregated precipitation data. The various methods agree that the data are zero-deflated, demonstrating the relevance of the proposed model. We then consider the clustering of zero-deflated data samples. Since such data are strongly non-normal, common methods for determining the number of clusters can be expected to perform poorly. We argue that Bayesian clustering, based on the marginal distribution of the observations, would account for the particular features of the model and thus yield better performance. Several clustering methods are compared in a simulation study, and the proposed method is applied to aggregated precipitation data from 28 measurement stations in British Columbia.
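To make the formulation issue concrete, here is a minimal sketch in generic notation; the symbols (pi, G, p_0, p_k, w) are illustrative and are not the thesis's own formulation.

```latex
% Usual continuous zero-inflated model: a mixture of a point mass at zero
% and a continuous distribution function G, with mixing weight \pi.
\[
  F(x) \;=\; \pi\,\mathbf{1}\{x \ge 0\} \;+\; (1-\pi)\,G(x),
  \qquad 0 \le \pi \le 1 .
\]
% Zero-deflation would require \pi < 0, which is not a valid mixture weight.
% By contrast, a zero-modified discrete model reweights the baseline
% probability of zero, p_0, by a factor w:
\[
  P(X=0) \;=\; w\,p_0, \qquad
  P(X=k) \;=\; \frac{1 - w\,p_0}{1 - p_0}\,p_k \quad (k \ge 1),
  \qquad 0 \le w \le 1/p_0 ,
\]
% so w > 1 gives zero-inflation and w < 1 gives zero-deflation, the kind of
% flexibility the alternative continuous formulation aims to provide.
```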
Abstract:
With more than two decades of weak economic performance since the bubble burst in the 1990s, the Japanese deflationary scenario has become the economic fate every developed economy fears. As the euro area continues to experience sustained low inflation, studying Japanese monetary policy may shed light on how to prevent persistent deflation. Using an SVAR methodology to understand the monetary transmission mechanism, we find some evidence that the euro area may possess characteristics that could eventually lead to a deflationary scenario. Whether it suffers the same fate as Japan will depend on how promptly macroeconomic policies are coordinated in response to its liquidity problems and the increasing public debt across member states.
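Since the abstract does not spell out the variable set or the identification scheme, the sketch below is only a generic, recursively identified (Cholesky) VAR in Python with statsmodels; the file name and column names are hypothetical placeholders, not the paper's data.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly euro-area series; the CSV and column names are
# placeholders for illustration, not the paper's actual dataset.
data = pd.read_csv("euro_area_macro.csv", index_col=0, parse_dates=True)
data = data[["output_gap", "inflation", "policy_rate"]]

model = VAR(data)
results = model.fit(maxlags=8, ic="aic")     # lag order selected by AIC

# Orthogonalized (Cholesky) impulse responses approximate a recursive SVAR,
# with variables ordered from slowest- to fastest-moving (policy rate last).
irf = results.irf(20)
irf.plot(orth=True)

# Forecast-error variance decomposition: how much of each variable's
# forecast-error variance is attributable to each orthogonalized shock.
fevd = results.fevd(20)
fevd.summary()
```

A Cholesky ordering is just one common identification choice; the paper's SVAR may impose different restrictions.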
Abstract:
We explore the macroeconomic effects of a compression in the long-term bond yield spread within the context of the Great Recession of 2007–09 via a time-varying parameter structural VAR model. We identify a “pure” spread shock defined as a shock that leaves the policy rate unchanged, which allows us to characterize the macroeconomic consequences of a decline in the yield spread induced by central banks’ asset purchases within an environment in which the policy rate is constrained by the effective zero lower bound. Two key findings stand out. First, compressions in the long-term yield spread exert a powerful effect on both output growth and inflation. Second, conditional on available estimates of the impact of the Federal Reserve’s and the Bank of England’s asset purchase programs on long-term yield spreads, our counterfactual simulations suggest that U.S. and U.K. unconventional monetary policy actions have averted significant risks both of deflation and of output collapses comparable to those that took place during the Great Depression.
Abstract:
The structures of the anhydrous 1:1 proton-transfer compounds of 4,5-dichlorophthalic acid (DCPA) with the monocyclic heteroaromatic Lewis bases 2-aminopyrimidine, 3-(aminocarbonyl)pyridine (nicotinamide) and 4-(aminocarbonyl)pyridine (isonicotinamide), namely 2-aminopyrimidinium 2-carboxy-4,5-dichlorobenzoate C4H6N3+ C8H3Cl2O4- (I), 3-(aminocarbonyl)pyridinium 2-carboxy-4,5-dichlorobenzoate C6H7N2O+ C8H3Cl2O4- (II) and the unusual salt adduct 4-(aminocarbonyl)pyridinium 2-carboxy-4,5-dichlorobenzoate 2-carboxymethyl-4,5-dichlorobenzoic acid (1/1/1) C6H7N2O+ C8H3Cl2O4-.C9H6Cl2O4 (III), have been determined at 130 K. Compound (I) forms discrete centrosymmetric hydrogen-bonded cyclic bis(cation-anion) units having both R2/2(8) and R2/1(4) N-H...O interactions. In compound (II) the primary N-H...O linked cation-anion units are extended into a two-dimensional sheet structure via amide-carboxyl and amide-carbonyl N-H...O interactions. The structure of (III) reveals the presence of an unusual and unexpected self-synthesized methyl monoester of the acid as an adduct molecule, giving one-dimensional hydrogen-bonded chains. In all three structures the hydrogen phthalate anions are
Abstract:
In this work, natural palygorskite impregnated with zero-valent iron (ZVI) was prepared and characterised. Anchoring ZVI particles on the surface of fibrous palygorskite helps overcome a disadvantage of ultra-fine powders, which tend to agglomerate into larger particles, reducing both the effective surface area and the catalyst performance. Methylene blue (MB) decolourization efficiency increased significantly on acid-treated palygorskite grafted with ZVI: within 5 min the MB concentration in solution dropped from 94 mg/L to around 20 mg/L, and equilibrium was reached at about 30 to 60 min with only around 10 mg/L of MB remaining in solution. Changes in the surface and structure of the prepared materials were characterized using X-ray diffraction (XRD), infrared (IR) spectroscopy, surface analysis and scanning electron microscopy (SEM) with elemental analysis and mapping. Compared with zero-valent iron or palygorskite alone, the presence of zero-valent iron reactive species on the palygorskite surface strongly increases the decolourization capacity for methylene blue, which is significant for developing novel modified clay catalyst materials for the removal of organic contaminants from waste water.
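For illustration only, the concentration values quoted above can be fitted with a pseudo-first-order decay toward an equilibrium concentration; the time points and the kinetic form are assumptions for this sketch, not the analysis reported in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Methylene blue concentrations quoted in the abstract (mg/L); treating them
# as a small kinetic dataset is an illustrative assumption.
t = np.array([0.0, 5.0, 30.0, 60.0])      # minutes
c = np.array([94.0, 20.0, 10.0, 10.0])    # mg/L

def pseudo_first_order(t, c_eq, k):
    """Decay from the initial 94 mg/L toward an equilibrium value c_eq."""
    c0 = 94.0
    return c_eq + (c0 - c_eq) * np.exp(-k * t)

(c_eq, k), _ = curve_fit(pseudo_first_order, t, c, p0=(10.0, 0.3))
print(f"fitted equilibrium concentration: {c_eq:.1f} mg/L")
print(f"fitted rate constant:             {k:.2f} 1/min")
```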
Abstract:
Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom and comparing the resulting virtual EPID images with the images acquired using experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
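CTCombine itself is a purpose-built code, so the following Python snippet is only a rough illustration of the two operations the abstract describes, rotating a CT volume and converting CT numbers to mass densities; the axis convention and the calibration points are assumptions, not CTCombine's implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_ct_volume(ct_hu, gantry_angle_deg):
    """Rotate a CT volume (Hounsfield units) in the axial plane.

    Rotating about axes (1, 2) assumes the volume is stored as
    (slice, row, column); the real convention depends on the export.
    """
    return rotate(ct_hu, angle=gantry_angle_deg, axes=(1, 2),
                  reshape=False, order=1, mode="nearest")

def hu_to_density(ct_hu):
    """Map CT numbers to mass density (g/cm^3) with a piecewise-linear ramp.

    The calibration points are generic illustrative values, not the
    calibration used by CTCombine.
    """
    hu_points = np.array([-1000.0, 0.0, 1000.0, 3000.0])
    rho_points = np.array([0.001, 1.0, 1.6, 2.8])
    return np.interp(ct_hu, hu_points, rho_points)

# Example: a water-equivalent dummy volume rotated to a 45-degree beam angle.
ct = np.zeros((64, 128, 128), dtype=np.float32)   # HU = 0 everywhere (water)
density = hu_to_density(rotate_ct_volume(ct, 45.0))
print(density.shape, float(density.mean()))
```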
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales rather than from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
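As a minimal sketch of the excess-zeros argument (not the paper's actual simulation design), the snippet below generates crash counts from heterogeneous, low-probability Poisson trials at each site and compares the observed share of zeros with what a single fitted Poisson would predict; the sample sizes and risk range are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(42)

# Each site experiences n independent passages, each with its own small crash
# probability: Poisson trials, with no dual-state (safe/unsafe) mechanism.
n_sites, n_trials = 2000, 5000
p = rng.uniform(1e-5, 4e-4, size=n_sites)    # heterogeneous risk across sites
counts = rng.binomial(n_trials, p)           # observed crash counts per site

# Compare the observed proportion of zeros with the prediction of a single
# Poisson fitted by matching the sample mean.
lam_hat = counts.mean()
print(f"observed proportion of zeros: {np.mean(counts == 0):.3f}")
print(f"single-Poisson prediction:    {np.exp(-lam_hat):.3f}")
```

With low exposure and heterogeneous risk, the observed share of zeros exceeds the single-Poisson prediction even though no site is ever in a "perfectly safe" state.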
Abstract:
The intent of this note is to succinctly articulate additional points that were not provided in the original paper (Lord et al., 2005) and to help clarify a collective reluctance to adopt zero-inflated (ZI) models for modeling highway safety data. A dialogue on this important issue, just one of many important safety modeling issues, is healthy discourse on the path towards improved safety modeling. This note first provides a summary of prior findings and conclusions of the original paper. It then presents two critical and relevant issues: the maximizing statistical fit fallacy and logic problems with the ZI model in highway safety modeling. Finally, we provide brief conclusions.
Abstract:
Zero energy buildings (ZEB) and zero energy homes (ZEH) are currently a hot topic globally for policy makers (what are the benefits and costs), designers (how do we design them), the construction industry (can we build them), marketing (will consumers buy them) and researchers (do they work and what are the implications). This paper presents initial findings from measured data for a 9-star (as built), off-ground detached family home constructed in south-east Queensland in 2008. The integrated systems approach to the design of the house is analysed against each of its three main goals: maximising the thermal performance of the building envelope, minimising energy demand whilst maintaining energy service levels, and implementing a multi-pronged low carbon approach to energy supply. The performance outcomes of each of these stages are evaluated against definitions of Net Zero Carbon / Net Zero Emissions (Site and Source) and Net Zero Energy (onsite generation vs primary energy imports). The paper concludes with a summary of the multiple benefits of combining very high efficiency building envelopes with diverse energy management strategies: a robustness, resilience, affordability and autonomy not generally seen in housing.
Abstract:
The two-dimensional free surface flow of a finite-depth fluid into a horizontal slot is considered. For this study, the effects of viscosity and gravity are ignored. A generalised Schwarz-Christoffel mapping is used to formulate the problem in terms of a linear integral equation, which is solved exactly with the use of a Fourier transform. The resulting free surface profile is given explicitly in closed-form.
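For context, the classical Schwarz-Christoffel formula maps the upper half-plane onto a polygonal domain; the generalised mapping used here to handle the free surface is not given in the abstract, so only the standard form is shown below.

```latex
% Standard Schwarz-Christoffel mapping of the upper half-plane onto a polygon
% with interior angles \alpha_k \pi and prevertices a_1 < a_2 < \dots < a_n:
\[
  f(\zeta) \;=\; A \;+\; C \int^{\zeta} \prod_{k=1}^{n} (w - a_k)^{\alpha_k - 1}\, \mathrm{d}w .
\]
```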
Abstract:
In this study, the delivery and portal imaging of one square-field and one conformal radiotherapy treatment was simulated using the Monte Carlo codes BEAMnrc and DOSXYZnrc. The treatment fields were delivered to a humanoid phantom from different angles by a 6 MV photon beam linear accelerator, with an amorphous-silicon electronic portal imaging device (a-Si EPID) used to provide images of the phantom generated by each field. The virtual phantom preparation code CTCombine was used to combine a computed-tomography-derived model of the irradiated phantom with a simple, rectilinear model of the a-Si EPID, at each beam angle used in the treatment. Comparison of the resulting experimental and simulated a-Si EPID images showed good agreement, within γ(3%, 3 mm), indicating that this method may be useful in providing accurate Monte Carlo predictions of clinical a-Si EPID images, for use in the verification of complex radiotherapy treatments.
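For reference, γ(3%, 3 mm) combines a 3% dose-difference tolerance with a 3 mm distance-to-agreement tolerance. The snippet below is a brute-force 1D sketch of the gamma evaluation (not the code used in the study), assuming evenly spaced dose profiles and a global dose normalization; the example profiles are hypothetical.

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, spacing_mm,
                   dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force 1D gamma index with global dose normalization.

    ref_dose, eval_dose : dose profiles on the same evenly spaced grid
    spacing_mm          : grid spacing in mm
    Returns the gamma value at each reference point.
    """
    x = np.arange(len(ref_dose)) * spacing_mm
    dose_norm = dose_tol * ref_dose.max()        # global 3% criterion
    gamma = np.empty(len(ref_dose))
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dist2 = ((x - xi) / dist_tol_mm) ** 2
        dose2 = ((eval_dose - di) / dose_norm) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

# Hypothetical profiles: a measured-like reference and a slightly shifted,
# slightly rescaled simulated profile.
x = np.linspace(-50, 50, 201)                    # mm, 0.5 mm spacing
ref = 100.0 * np.exp(-(x / 20.0) ** 2)
sim = 101.5 * np.exp(-((x - 1.0) / 20.0) ** 2)
g = gamma_index_1d(ref, sim, spacing_mm=0.5)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```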