966 results for Masaniello, Tommaso Aniello, Known as, 1620-1647.


Relevance:

10.00%

Publisher:

Abstract:

Historically, the development philosophy for the Territory of Papua and New Guinea (TPNG, formerly the two separate territories of Papua and New Guinea) was equated with economic development, with a focus on agricultural development. To achieve the modification of, or complete change in, indigenous farming systems, the Australian Government’s Department of External Territories adopted and utilised a programme based on agricultural extension. Prior to World War II, under Australian administration, the economic development of these two territories, as in many colonies of the time, was based on the institution of the plantation, and little agricultural development was initiated for indigenous people. This changed after World War II to a rationale based on the promotion and advancement of primary industry, one that also came to include indigenous farmers. To develop agriculture within a colony, it was thought that a modification to, or in some cases the complete transformation of, existing farming systems was necessary to improve the material welfare of the population. It was also seen as a guarantee of the future national interest of the sovereign state after independence was granted. The Didiman and Didimisis became the frontline field operatives of this theoretical model of development. This thesis examines the Didiman’s field operations, the structural organisation of agricultural administration and the application of policy in the two territories.

Relevance:

10.00%

Publisher:

Abstract:

The exchange of design models in the design and construction industry is evolving away from 2-dimensional computer-aided design (CAD) and paper towards semantically rich 3-dimensional digital models. This approach, known as Building Information Modelling (BIM), is anticipated to become the primary means of information exchange between the various parties involved in construction projects. From a technical perspective, the domain represents an interesting study in model-based interoperability, since the models are large and complex, and the industry is one in which collaboration is a vital part of business. In this paper, we present our experiences with issues of model-based interoperability in exchanging building information models between various tools, and in implementing tools which consume BIM models, particularly using the industry-standard IFC data modelling format. We report on the successes and challenges encountered as the industry moves further towards fully digitised information exchange.
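As an illustration of the kind of BIM-consuming tooling discussed above, the sketch below parses an IFC file and walks its wall elements using the open-source IfcOpenShell library. This is a minimal sketch, not the authors' own tooling, and the file name is a placeholder:

# Minimal sketch of consuming an IFC building model with the open-source
# IfcOpenShell library; "model.ifc" is a placeholder path.
import ifcopenshell

model = ifcopenshell.open("model.ifc")      # parse the IFC file

# IfcWall is one of many element types defined by the IFC schema.
for wall in model.by_type("IfcWall"):
    # Every IFC root entity carries a GlobalId and an optional Name.
    print(wall.GlobalId, wall.Name)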

Relevance:

10.00%

Publisher:

Abstract:

Global climate change may induce accelerated soil organic matter (SOM) decomposition through increased soil temperature, and thus impact the C balance in soils. We hypothesized that compartmentalization of substrates and decomposers in the soil matrix would decrease SOM sensitivity to temperature. We tested our hypothesis with three short-term laboratory incubations with differing physical protection treatments conducted at different temperatures. Overall, CO2 efflux increased with temperature, but responses among physical protection treatments were not consistently different. Similar respiration quotient (Q10) values across physical protection treatments did not support our original hypothesis that the largest Q10 values would be observed in the treatment with the least physical protection. Compartmentalization of substrates and decomposers is known to reduce the decomposability of otherwise labile material, but the hypothesized attenuation of temperature sensitivity was not detected, and thus the sensitivity is probably driven by the thermodynamics of biochemical reactions as expressed by Arrhenius-type equations.
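For reference, the temperature sensitivity quotient and the Arrhenius-type rate law referred to above take their standard textbook forms (generic notation, not taken from the study itself):

Q_{10} = \left( \frac{R_2}{R_1} \right)^{10/(T_2 - T_1)}, \qquad k = A \, e^{-E_a/(RT)}

where R_1 and R_2 are respiration rates measured at temperatures T_1 and T_2, k is the reaction rate, A the pre-exponential factor, E_a the activation energy, R the gas constant and T the absolute temperature.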

Relevance:

10.00%

Publisher:

Abstract:

Key resource areas (KRAs), defined as dry-season foraging zones for herbivores, were studied relative to the more extensive outlying rangeland areas (non-KRAs) in Kenya. How and why do some areas (KRAs) support herbivores during droughts when forage is scarce elsewhere in the landscape? We hypothesized that KRAs have fundamental ecological and socially determined attributes that enable them to provide forage during critical times, and we sought to characterize some of those attributes in this study. Field surveys with pastoralists, ranchers, scientists and government officials delineated KRAs on the ground, and the identified KRAs were mapped based on global positioning and local experts' information on KRA accessibility and ecological attributes. Using the map of known KRAs and non-KRAs, we examined characteristics of soils, climate, topography and land use/cover at KRAs relative to non-KRAs, and greenness trends for KRAs versus non-KRAs were evaluated with a 22-year dataset of the Normalized Difference Vegetation Index (NDVI). Field surveys of KRAs provided qualitative information on their role as dry-season foraging zones. At the landscape level, KRAs took different forms based on forage availability during the dry season, but generally occurred in parts of the landscape with aseasonal water availability and/or in areas that are difficult to access during periods of wet-season forage abundance. At the scale of the study, soil attributes did not differ significantly between KRAs and non-KRAs, whereas slopes were generally steeper and elevations higher at KRAs than at non-KRAs. Field survey respondents indicated that animals and humans generally avoid difficult-to-access hilly areas, using them only when all other easily accessible rangeland is depleted of forage during droughts. Understanding the nature of KRAs will support the identification, protection and restoration of critical forage hotspots for herbivores by strengthening rangeland inventory, monitoring, policy formulation and conservation efforts to improve habitats and human welfare.
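The greenness comparison rests on the standard NDVI definition computed from red and near-infrared reflectance. The sketch below shows that computation and a simple per-pixel linear trend over an annual series; array names, shapes and values are illustrative placeholders, not the study's dataset:

# Illustrative NDVI computation and per-pixel greenness trend with NumPy.
# "red" and "nir" are placeholder reflectance stacks of shape (years, rows, cols).
import numpy as np

years = np.arange(22)                      # e.g. a 22-year annual record
red = np.random.rand(22, 100, 100)         # placeholder red-band reflectance
nir = np.random.rand(22, 100, 100)         # placeholder near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)    # standard NDVI definition

# Per-pixel linear trend (slope of NDVI against year) via least squares.
flat = ndvi.reshape(22, -1)
slopes = np.polyfit(years, flat, deg=1)[0].reshape(100, 100)
print(slopes.mean())                       # mean greenness trend across the scene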

Relevance:

10.00%

Publisher:

Abstract:

A polycaprolactone (PCL)–collagen electrospun mesh is proposed as a novel alternative to the conventional periosteal graft in autologous chondrocyte implantation. This is the first known attempt at designing a cartilage-resurfacing membrane using a mechanically resilient PCL mesh, with a weight-average molecular weight of 139 300, enhanced with bioactive collagen. PCL–collagen 10, 20 and 40% electrospun meshes (Coll-10, Coll-20 and Coll-40) were evaluated, and it was found that the retention of surface collagen could only be achieved in Coll-20 and Coll-40. Furthermore, Coll-20 was stiffer and stronger than Coll-40 and satisfied the mechanical demands at the cartilage implant site. When seeded with mesenchymal stem cells (MSCs), the cells adhered to the surface of the Coll-20 mesh and remained viable over a period of 28 days; however, they were unable to infiltrate the dense meshwork. Cell compatibility was also noted in the chondrogenic environment, as the MSCs differentiated into chondrocytes with the expression of Sox9, aggrecan and collagen II. More importantly, the mesh did not induce a hypertrophic response from the cells. The current findings support the use of Coll-20 as a cartilage patch, and future implantation studies are anticipated.

Relevance:

10.00%

Publisher:

Abstract:

This paper provides a review of the state-of-the-art work relevant to the use of public mobile data networks for aircraft telemetry and control purposes. Moreover, it describes the characterisation, for airborne use, of the public mobile data communication systems known broadly as 3G. The motivation for this study was to explore how these mature public communication systems could be used for aviation purposes. An experimental system was fitted to a light aircraft to record communication latency, line speed, RF level, packet loss and cell tower identifier. Communication was established using Internet protocols, and a connection was made to a local server. The aircraft was flown in both remote and populous areas at altitudes of up to 8500 ft in South East Queensland, Australia. Results show that average airborne RF levels were better than those on the ground by 21%, in the order of −77 dBm. Latencies were in the order of 500 ms (half the latency of Iridium), with an average download speed of 0.48 Mb/s, an average uplink speed of 0.85 Mb/s, and a packet loss of 6.5%. The maximum communication range observed was 70 km from a single cell station. The paper also describes the possible limitations and utility of such a communications architecture for both manned and unmanned aircraft systems.
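To make the latency and packet-loss measurements concrete, here is a minimal sketch of the kind of round-trip probe such an experimental system might run over a 3G IP link. The echo server address is a placeholder and this is not the authors' airborne software:

# Minimal round-trip latency / packet-loss probe over UDP.
# SERVER is a hypothetical echo endpoint, not from the paper.
import socket
import time

SERVER = ("203.0.113.10", 9000)   # placeholder echo server address
N = 100                           # number of probe packets

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)              # count replies slower than 2 s as lost

rtts, lost = [], 0
for seq in range(N):
    t0 = time.monotonic()
    sock.sendto(str(seq).encode(), SERVER)
    try:
        sock.recvfrom(1024)       # wait for the echoed packet
        rtts.append((time.monotonic() - t0) * 1000.0)   # RTT in ms
    except socket.timeout:
        lost += 1

if rtts:
    print(f"mean RTT {sum(rtts) / len(rtts):.0f} ms, loss {100 * lost / N:.1f}%")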

Relevance:

10.00%

Publisher:

Abstract:

The phenomenon that married men earn higher average wages than unmarried men, the so-called marriage premium, is well known. However, the robustness of the marriage premium across the wage distribution and the underlying causes of the marriage premium deserve closer scrutiny. Focusing on the entire wage distribution and employing recently developed semi-nonparametric tests for quantile treatment effects, our findings cast doubt on the robustness of the premium. We find that the premium is explained by selection above the median, whereas a positive premium is obtained only at very low wages. We argue that the causal effect at low wages is probably attributable to employer discrimination.
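For a flavour of distribution-wide estimation (the paper's own semi-nonparametric quantile treatment effect tests are considerably more involved), a basic quantile regression of log wages on marital status can be run as below; the dataset and column names are hypothetical:

# Illustrative quantile regression of log wage on marital status using
# statsmodels; "wages.csv" and its columns are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wages.csv")     # expects columns: log_wage, married, exper

model = smf.quantreg("log_wage ~ married + exper", df)
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    fit = model.fit(q=q)
    # The "married" coefficient traces the premium across the wage distribution.
    print(q, fit.params["married"])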

Relevance:

10.00%

Publisher:

Abstract:

The uncontrolled disposal of solid wastes poses an immediate threat to public health and a long-term threat to the environmental well-being of future generations. Solid waste is waste resulting from human activities that is solid and unwanted (Peavy et al., 1985). If unmanaged, dumped solid wastes generate liquid and gaseous emissions that are detrimental to the environment. One serious consequence is metal contamination, which poses a risk to human health and ecosystems. For example, some heavy metals (cadmium, chromium compounds and nickel tetracarbonyl) are known to be highly toxic and aggressive at elevated concentrations. Iron, copper and manganese can cause staining, while aluminium causes depositions and discolorations. In addition, calcium and magnesium cause hardness in water, leading to scale deposition and scum formation. Though a metalloid rather than a metal, arsenic is poisonous at relatively high concentrations and, at low concentrations, can cause skin cancer. Normally, metal contaminants are found in dissolved form in the liquid percolating through landfills. Because average metal concentrations from full-scale landfills, test cells and laboratory studies have tended to be generally low, metal contamination originating from landfills is not usually considered a major concern (Kjeldsen et al., 2002; Christensen et al., 1999). However, a number of factors make it necessary to take a closer look at metal contaminants from landfills. One of these factors is variability: landfill leachate can differ in quality depending on weather and operating conditions, so metal contaminant concentrations may be quite low at one moment and quite high at another; these conditions also govern the amount of leachate that is generated. Another factor is biodiversity: it cannot be assumed that a particular metal contaminant is harmless to flora and fauna (including micro-organisms) just because it is harmless to human health, which has significant implications for ecosystems and the environment. Finally, there is the moral factor: because uncertainty surrounds the potential effects of metal contamination, it is appropriate to take precautions to prevent it from taking place. Consequently, good, empirically supported scientific knowledge is necessary to adequately understand the extent of the problem and to improve the way waste is disposed of.

Relevance:

10.00%

Publisher:

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions, the use of an independent data set, and an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences of expected crash counts are likely to be different than factors that might help to explain unaccounted for variation in crashes across sites.
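The idea of letting the dispersion parameter vary with covariates can be sketched with a negative binomial log-likelihood in which both the mean and the dispersion are log-linear functions of site characteristics. The code below uses simulated data and maximum likelihood rather than the paper's Bayesian MCMC estimation, and all parameter values are illustrative:

# Sketch of a negative binomial crash-count model whose dispersion parameter
# is itself a function of covariates. Simulated data, maximum likelihood;
# this does not reproduce the paper's Georgia models or Bayesian setup.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # mean covariates
Z = X                                                   # dispersion covariates

# Simulate counts whose extra-Poisson variation depends on the covariates.
true_beta, true_gamma = np.array([0.5, 0.8]), np.array([-0.7, 0.3])
mu_true = np.exp(X @ true_beta)
r_true = np.exp(-(Z @ true_gamma))        # inverse dispersion, 1/alpha
y = rng.negative_binomial(n=r_true, p=r_true / (r_true + mu_true))

def negloglik(params):
    beta, gamma = params[:2], params[2:]
    mu = np.exp(X @ beta)                 # expected crash count (mean structure)
    r = np.exp(-(Z @ gamma))              # covariate-dependent inverse dispersion
    return -np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                   + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

res = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print(res.x)   # approximately recovers (beta, gamma)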

Relevance:

10.00%

Publisher:

Abstract:

Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
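The flavour of such criteria can be illustrated with two simple measures: false positive/negative counts against a known hotspot set, and a top-k rank-consistency score of the kind that exploits ordinal ranking data. The implementation below is a generic reconstruction, not the paper's four specific tests:

# Illustrative HSID evaluation criteria: false positives/negatives against a
# set of known hotspots, and top-k ranking consistency across two outcomes.
# A generic reconstruction, not the paper's four proposed tests.
import numpy as np

def false_id_counts(scores, true_hotspots, k):
    """Flag the k top-ranked sites and compare them with the true hotspot set."""
    flagged = set(np.argsort(scores)[::-1][:k])
    fp = len(flagged - true_hotspots)   # safe sites flagged as hotspots
    fn = len(true_hotspots - flagged)   # hotspots the method missed
    return fp, fn

def rank_consistency(scores_a, scores_b, k):
    """Fraction of top-k sites identified in both of two ranking outcomes."""
    top_a = set(np.argsort(scores_a)[::-1][:k])
    top_b = set(np.argsort(scores_b)[::-1][:k])
    return len(top_a & top_b) / k

# Toy usage: 100 sites, the first 10 of which are true hotspots.
rng = np.random.default_rng(1)
scores = rng.poisson(3.0, size=100).astype(float)
print(false_id_counts(scores, true_hotspots=set(range(10)), k=10))
print(rank_consistency(scores, rng.permutation(scores), k=10))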

Relevance:

10.00%

Publisher:

Abstract:

Predicting safety on roadways is standard practice for road safety professionals and has a corresponding extensive literature. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables, factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than the actual weather conditions during which crashes were observed have been used. Without the inclusion of weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g. intersections, road segments, highways, etc.). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, or planning-level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning-level safety. Accounting for regional differences, similar to the experience with microscale safety models, has been problematic during the development of planning-level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted-variable bias resulting from excluding these important variables may adversely impact the coefficients of included variables, thus contributing to difficulty in model interpretation and accuracy. This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, in models predicting total crash frequency and the frequency of fatal and/or injury crashes. The purpose of the study was to determine whether these variables do in fact improve the overall goodness of fit of the models, whether they may explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and from Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways, depending on rain frequency and intensity.

Relevance:

10.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
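The zero-generation argument can be reproduced in a few lines: with low exposure and unobserved heterogeneity across sites, simple Poisson counts already contain more zeros than a single pooled Poisson predicts, with no dual-state process involved. The parameter values below are illustrative, not those of the paper's experiment:

# Toy version of the "excess zeros" demonstration: low exposure plus
# site-to-site heterogeneity yields more zeros than a pooled Poisson
# predicts -- no dual-state (safe/unsafe) process is required.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 10_000

# Gamma-distributed true crash means: heterogeneous and small (low exposure).
mu = rng.gamma(shape=0.5, scale=0.4, size=n_sites)   # mean about 0.2 crashes
y = rng.poisson(mu)                                  # observed crash counts

obs_zeros = np.mean(y == 0)          # empirical share of zero-crash sites
pooled_pred = np.exp(-y.mean())      # zero share a single Poisson would predict
print(f"observed zeros {obs_zeros:.3f} vs pooled-Poisson prediction {pooled_pred:.3f}")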

Relevance:

10.00%

Publisher:

Abstract:

In recent years the development and use of crash prediction models for roadway safety analyses have received substantial attention. These models, also known as safety performance functions (SPFs), relate the expected crash frequency of roadway elements (intersections, road segments, on-ramps) to traffic volumes and other geometric and operational characteristics. A commonly practiced approach for applying intersection SPFs is to assume that crash types occur in fixed proportions (e.g., rear-end crashes make up 20% of crashes, angle crashes 35%, and so forth) and then apply these fixed proportions to crash totals to estimate crash frequencies by type. As demonstrated in this paper, such a practice makes questionable assumptions and results in considerable error in estimating crash proportions. Through the use of rudimentary SPFs based solely on the annual average daily traffic (AADT) of major and minor roads, the homogeneity-in-proportions assumption is shown not to hold across AADT, because crash proportions vary as a function of both major and minor road AADT. For example, with minor road AADT of 400 vehicles per day, the proportion of intersecting-direction crashes decreases from about 50% with 2,000 major road AADT to about 15% with 82,000 AADT. Same-direction crashes increase from about 15% to 55% for the same comparison. The homogeneity-in-proportions assumption should be abandoned, and crash type models should be used to predict crash frequency by crash type. SPFs that use additional geometric variables would only exacerbate the problem quantified here. Comparison of models for different crash types using additional geometric variables remains the subject of future research.
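The mechanism is easy to see with two crash-type SPFs whose AADT exponents differ: their predicted shares then necessarily shift with traffic volume. The coefficients below are hypothetical, chosen only to reproduce the qualitative pattern described, not the paper's fitted values:

# Why fixed crash-type proportions fail: two hypothetical SPFs of the form
# mu = exp(b0) * AADT_major**b1 * AADT_minor**b2 with different exponents.
# Coefficients are illustrative, not the paper's estimates.
import numpy as np

def spf(aadt_major, aadt_minor, b0, b1, b2):
    return np.exp(b0) * aadt_major**b1 * aadt_minor**b2

aadt_major = np.array([2_000.0, 20_000.0, 82_000.0])
aadt_minor = 400.0

mu_angle = spf(aadt_major, aadt_minor, b0=-8.0, b1=0.55, b2=0.60)   # intersecting-direction
mu_same = spf(aadt_major, aadt_minor, b0=-11.0, b1=1.05, b2=0.25)   # same-direction

share_angle = mu_angle / (mu_angle + mu_same)
print(share_angle)   # the intersecting-direction share falls as major AADT rises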

Relevance:

10.00%

Publisher:

Abstract:

Expert panels have been used extensively in the development of the "Highway Safety Manual" to extract research information from highway safety experts. While the panels have been used to recommend agendas for new and continuing research, their primary role has been to develop accident modification factors—quantitative relationships between highway safety and various highway safety treatments. Because the expert panels derive quantitative information in a “qualitative” environment and because their findings can have significant impacts on highway safety investment decisions, the expert panel process should be described and critiqued. This paper is the first known written description and critique of the expert panel process and is intended to serve professionals wishing to conduct such panels.

Relevance:

10.00%

Publisher:

Abstract:

Identifying crash “hotspots”, “blackspots”, “sites with promise”, or “high-risk” locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, few studies have used controlled experiments to systematically assess them. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three hotspot identification methods observed in practice are evaluated: simple ranking, confidence interval, and empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data where high-risk sites are not known with certainty. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors is manipulated to simulate a host of ‘real world’ conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high-risk site as safe) are compared across methods. Finally, the effects of crash history duration on the three HSID approaches are assessed. The results illustrate that the empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate crash history duration.
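To show what the empirical Bayes comparison involves, the sketch below simulates sites whose true means are known a priori and then flags hotspots by raw count ranking versus EB-adjusted ranking. The gamma-Poisson shrinkage formula is the standard one; all parameter values are illustrative rather than the paper's experimental design:

# Toy hotspot identification experiment: simple ranking vs. empirical Bayes
# on simulated sites whose true risk is known a priori.
import numpy as np

rng = np.random.default_rng(7)
n_sites, phi = 1000, 2.0                    # phi: inverse dispersion parameter

# Each site has its own SPF-predicted mean (e.g. a function of traffic volume).
mu = np.exp(rng.normal(0.5, 0.8, size=n_sites))     # illustrative SPF output
true_risk = rng.gamma(shape=phi, scale=mu / phi)    # latent site-level means
counts = rng.poisson(true_risk)                     # one year of observed crashes

# Standard gamma-Poisson EB estimate: shrink each count toward its SPF mean
# with weight w_i = phi / (phi + mu_i).
w = phi / (phi + mu)
eb = w * mu + (1 - w) * counts

def hit_rate(scores, k=50):
    """Share of the k flagged sites that are among the k truly riskiest sites."""
    flagged = np.argsort(scores)[::-1][:k]
    worst = np.argsort(true_risk)[::-1][:k]
    return len(set(flagged) & set(worst)) / k

print("simple ranking :", hit_rate(counts.astype(float)))
print("empirical Bayes:", hit_rate(eb))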