860 results for conceptual data modelling


Relevance: 30.00%
Publisher:
Abstract:

Published in "Information control in manufacturing 1998 : (INCOM'98) : advances in industrial engineering : a proceedings volume from the 9th IFAC Symposium, Nancy-Metz, France, 24-26 June 1998. Vol. 2"

Relevance: 30.00%
Publisher:
Abstract:

Empirical studies on industrial location do not typically distinguish between new and relocated establishments. This paper addresses this shortcoming using data on the frequency of these events in municipalities of the same economic-administrative region. This enables us to test not only for differences in their determinants but also for interrelations between start-ups and relocations. Estimates from count regression models for cross-section and panel data show that, although partial effects differ, common patterns arise in “institutional” and “neoclassical” explanatory factors. Also, start-ups and relocations are positively but asymmetrically related. JEL classification: C25, R30, R10. Keywords: cities, count data models, industrial location
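The count regression machinery mentioned in this abstract can be illustrated with a minimal Poisson regression fitted by iteratively reweighted least squares (IRLS). This is only a sketch: the data, the single predictor and the coefficient values below are simulated for illustration and are not the paper's actual covariates or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical municipality-level data: one predictor (think log population)
# and Poisson-distributed counts of start-ups.  All values are illustrative.
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ true_beta))

# Poisson regression fitted by IRLS (log link).
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)          # mean under the current estimate
    W = mu                         # Poisson working weights
    z = X @ beta + (y - mu) / mu   # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)  # should recover something close to true_beta
```

A negative binomial variant (common for over-dispersed location counts) would replace the Poisson weights with ones involving a dispersion parameter, but the fitting loop has the same shape.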

Relevance: 30.00%
Publisher:
Abstract:

Report on a scientific sojourn at Simon Fraser University, Canada, from July to September 2007. General context: landscape change in recent years is having significant impacts on biodiversity in many Mediterranean areas. Land abandonment, urbanisation and especially fire are profoundly transforming large areas of the Western Mediterranean basin, and we know little about how these changes influence species distributions, or how species will respond to further change in a context of global change, including climate. General objectives: to integrate landscape and population dynamics models in a platform that captures species distribution responses to landscape change and assesses the impact on species distributions of different scenarios of further change. Specific objective 1: to develop a landscape dynamic model capturing fire and forest succession dynamics in Catalonia, linked to a stochastic landscape occupancy model (SLOM) (or spatially explicit population model, SEPM) for the Ortolan bunting, a species strongly tied to fire-related habitat in the region. Predictions from the Ortolan bunting occupancy or spatially explicit population model (SEPM) should be evaluated using data from the DINDIS database, which tracks bird colonisation of recently burnt large areas (>50 ha). Through a number of SEPM scenarios with different parameter values, we should be able to assess different hypotheses about the factors driving bird colonisation of newly burnt patches. These factors are mainly landscape context (i.e. difficulty of reaching the patch and the potential presence of coloniser sources), dispersal constraints, the type of regenerating vegetation after fire, and species characteristics (niche breadth, etc.).

Relevance: 30.00%
Publisher:
Abstract:

1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. 
Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data drawn from large archives of natural history collection species presence data rather than with randomly sampled pseudo-absence data.
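The pseudo-absence workflow described above (fit a GLM to presences plus simulated absences, then evaluate it with AUC) can be sketched as follows. This is a minimal illustration of strategy 1 only (pseudo-absences drawn completely at random), on simulated data; the species, environmental gradient and sample sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical environmental gradient; the species occupies warm sites.
env = rng.uniform(0, 10, size=2000)
p_true = 1 / (1 + np.exp(-(env - 6)))        # true occupancy probability
occupied = rng.random(2000) < p_true

presence = env[occupied][:200]               # presence-only "collection" records
pseudo_abs = rng.uniform(0, 10, size=200)    # strategy 1: random pseudo-absences

x = np.concatenate([presence, pseudo_abs])
y = np.concatenate([np.ones_like(presence), np.zeros(200)])
X = np.column_stack([np.ones_like(x), x])

# Logistic GLM fitted by IRLS
beta = np.zeros(2)
for _ in range(50):
    p = 1 / (1 + np.exp(-(X @ beta)))
    W = p * (1 - p) + 1e-9
    z = X @ beta + (y - p) / W
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

# AUC via the rank (Mann-Whitney) formulation
scores = X @ beta
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(round(auc, 2))
```

Strategies 2-4 of the study differ only in how `pseudo_abs` is drawn (other-species sites, habitat-filtered sites, or both), so the fitting and evaluation code stays the same.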

Relevance: 30.00%
Publisher:
Abstract:

1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. 
These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
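The 'predict first, assemble later' strategy can be sketched in a few lines: build one distribution model per species, stack the predictions, then derive community-level outputs such as species richness and compositional dissimilarity. Everything below (the gradient, species optima and Gaussian response shape) is illustrative, and the per-species "models" are trivially parameterised rather than fitted.

```python
import numpy as np

rng = np.random.default_rng(2)

env = np.linspace(0, 10, 100)            # hypothetical sites along one gradient
optima = rng.uniform(1, 9, size=20)      # illustrative species optima

# "Predict first": one predicted-occurrence map per species (rows = species)
stack = np.exp(-0.5 * (env[None, :] - optima[:, None]) ** 2)

# "Assemble later": community-level outputs derived from the stacked maps
richness = stack.sum(axis=0)             # expected species richness per site

# Bray-Curtis dissimilarity between the two gradient endpoints
a, b = stack[:, 0], stack[:, -1]
bray = np.abs(a - b).sum() / (a + b).sum()
print(richness.shape, round(bray, 2))
```

The 'assemble first, predict later' alternative would instead classify or ordinate the raw survey data and model the resulting community entities directly against the environment.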

Relevance: 30.00%
Publisher:
Abstract:

The role of land cover change as a significant component of global change has become increasingly recognized in recent decades. Large databases measuring land cover change, and the data which can potentially be used to explain the observed changes, are also becoming more commonly available. When developing statistical models to investigate observed changes, it is important to be aware that the chosen sampling strategy and modelling techniques can influence results. We present a comparison of three sampling strategies and two forms of grouped logistic regression models (multinomial and ordinal) in the investigation of patterns of successional change after agricultural land abandonment in Switzerland. Results indicated that both ordinal and nominal transitional change occurs in the landscape and that the use of different sampling regimes and modelling techniques as investigative tools yields different results. Synthesis and applications. Our multimodel inference successfully identified a set of consistently selected indicators of land cover change, which can be used to predict further change, including annual average temperature, the number of already overgrown neighbouring areas of land and distance to historically destructive avalanche sites. This allows for more reliable decision making and planning with respect to landscape management. Although both model approaches gave similar results, ordinal regression yielded more parsimonious models that identified the important predictors of land cover change more efficiently. This approach is therefore favourable where the land cover change pattern can be interpreted as an ordinal process; otherwise, multinomial logistic regression is a viable alternative.
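A minimal sketch of the multinomial half of this comparison: a softmax logistic regression fitted by gradient ascent to simulated succession stages. The stages, predictor and thresholds are invented for the example; an ordinal (proportional-odds) model would instead fit a single slope with ordered cut-points, which is the more parsimonious form the abstract favours.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical abandoned plots: temperature drives an ordered succession
# stage (0 = open, 1 = shrub, 2 = forest).  All values are illustrative.
n = 600
temp = rng.uniform(0, 10, n)
latent = temp + rng.normal(scale=2.0, size=n)
y = np.digitize(latent, [3.5, 6.5])          # three ordered stages

# Multinomial (softmax) logistic regression fitted by gradient ascent
X = np.column_stack([np.ones(n), (temp - temp.mean()) / temp.std()])
Y = np.eye(3)[y]                             # one-hot responses
B = np.zeros((2, 3))
for _ in range(1000):
    logits = X @ B
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    B += 0.5 * X.T @ (Y - P) / n             # average log-likelihood gradient

acc = ((X @ B).argmax(axis=1) == y).mean()
print(round(acc, 2))
```

Because the simulated stages really are ordered along the temperature axis, the fitted class boundaries fall in sequence, which is exactly the structure an ordinal model encodes with fewer parameters.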

Relevance: 30.00%
Publisher:
Abstract:

In multilevel modelling, interest in modelling the nested structure of hierarchical data has been accompanied by increasing attention to different forms of spatial interaction across the levels of the hierarchy. Neglecting such interactions is likely to create problems of inference, which typically assumes independence. In this paper we review approaches to multilevel modelling with spatial effects and attempt to connect the two literatures, discussing the advantages and limitations of the various approaches.

Relevance: 30.00%
Publisher:
Abstract:

Åknes is an active, complex, large rockslide of approximately 30-40 Mm³ located within the Proterozoic gneisses of western Norway. The observed surface displacements indicate that this rockslide is divided into several blocks moving in different directions at velocities of between 3 and 10 cm per year. Because of regional safety issues and economic interests, this rockslide has been extensively monitored since 2004. Understanding the deformation mechanism is crucial for the implementation of a viable monitoring system. Detailed field investigations and the analysis of a digital elevation model (DEM) indicate that the movements and the block geometry are controlled by the main schistosity (S1) in the gneisses, folds, joints and regional faults. Such complex slope deformations exploit pre-existing structures, but also produce new failure surfaces and deformation zones, such as preferential rupture in fold-hinge zones. Our interpretation provides a consistent conceptual three-dimensional (3D) model for the movements measured by various methods, which is crucial for numerical stability modelling. In addition, this reinterpretation of the morphology confirms that several rockslides have occurred from the Åknes slope in the past. They may be related to scars propagating along the vertical foliation in fold hinges. Finally, a model of the evolution of the Åknes slope is presented.

Relevance: 30.00%
Publisher:
Abstract:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth.
Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
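Band-limited spectral whitening of the kind discussed above can be sketched on a synthetic trace: flatten the amplitude spectrum inside the signal band while preserving the phase, then transform back. The wavelet, band limits and sampling interval below are illustrative; spectral blueing would apply a gentle high-frequency ramp inside the band instead of a flat target.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "georadar" trace: sparse reflectivity convolved with a Ricker wavelet
n = 512
dt = 1e-9                                         # 1 ns sampling (illustrative)
refl = rng.normal(size=n) * (rng.random(n) < 0.05)
t = np.arange(-32, 32) * dt
f0 = 100e6                                        # 100 MHz centre frequency
wavelet = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
trace = np.convolve(refl, wavelet, mode="same")

# Band-limited spectral whitening: unit amplitude inside the signal band,
# phase untouched, spectrum outside the band left as-is.
spec = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(n, dt)
band = (freqs > 20e6) & (freqs < 250e6)
amp = np.abs(spec)
white = spec.copy()
white[band] = spec[band] / (amp[band] + 1e-12)
balanced = np.fft.irfft(white, n)
print(balanced.shape)
```

For blueing, the in-band target would be replaced by a slowly rising function of frequency rather than a constant, mimicking the blue reflection-coefficient spectra the abstract describes.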

Relevance: 30.00%
Publisher:
Abstract:

On December 4th 2007, a 3-Mm3 landslide occurred along the northwestern shore of Chehalis Lake. The initiation zone is located at the intersection of the main valley slope and the northern sidewall of a prominent gully. The slope failure caused a displacement wave that ran up to 38 m on the opposite shore of the lake. The landslide is temporally associated with a rain-on-snow meteorological event which is thought to have triggered it. This paper describes the Chehalis Lake landslide and presents a comparison of discontinuity orientation datasets obtained using three techniques: field measurements, terrestrial photogrammetric 3D models and an airborne LiDAR digital elevation model to describe the orientation and characteristics of the five discontinuity sets present. The discontinuity orientation data are used to perform kinematic, surface wedge limit equilibrium and three-dimensional distinct element analyses. The kinematic and surface wedge analyses suggest that the location of the slope failure (intersection of the valley slope and a gully wall) has facilitated the development of the unstable rock mass which initiated as a planar sliding failure. Results from the three-dimensional distinct element analyses suggest that the presence, orientation and high persistence of a discontinuity set dipping obliquely to the slope were critical to the development of the landslide and led to a failure mechanism dominated by planar sliding. The three-dimensional distinct element modelling also suggests that the presence of a steeply dipping discontinuity set striking perpendicular to the slope and associated with a fault exerted a significant control on the volume and extent of the failed rock mass but not on the overall stability of the slope.

Relevance: 30.00%
Publisher:
Abstract:

Abstract: Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated based on expert judgements.
There is therefore a major requirement for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools in order to predict exposure levels. In a first step, a survey was made among professionals to define their expectations of modelling tools (what types of results, models and potential observable parameters). It was found that models are rarely used in Switzerland and that exposures are mainly estimated from the past experience of the expert. Moreover, chemical emissions and their dispersion near the source were also considered key parameters. Experimental and modelling studies were also performed in some specific cases in order to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to CO in different situations and compared with the exposure levels found in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, some laboratory investigations were undertaken to characterize the waterproofing overspray emission rate. A classical two-zone model was used to assess the aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion for tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points with direct-reading instruments. It was found that, from a statistical point of view, the compartmental theory makes sense, but that attribution to a given compartment could not be done on the basis of simple geometric considerations.
In a further step, the experimental data were complemented by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The various data obtained were used to improve an existing two-compartment exposure model. A tool was developed to include specific determinants in the choice of compartment, thus largely improving the reliability of the predictions. All these investigations helped improve our understanding of modelling tools and identify their limitations. The integration of more accessible determinants, in accordance with experts' needs, may indeed enhance model application in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but might also improve the conditions in which expert judgements take place, and therefore workers' health protection.
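The classical two-zone model mentioned in this abstract has a well-known steady-state form: the far-field concentration is G/Q and the near field adds a local term G/β, where G is the emission rate, Q the room ventilation flow and β the inter-zonal air flow between the two zones. A sketch with purely illustrative parameter values:

```python
# Steady-state two-zone (near-field / far-field) exposure model.
# All parameter values below are illustrative, not from the study.
G = 5.0       # emission rate, mg/min
Q = 20.0      # room supply air flow, m^3/min
beta = 5.0    # near-field/far-field inter-zonal air flow, m^3/min

C_far = G / Q                  # far-field steady-state concentration, mg/m^3
C_near = G / Q + G / beta      # near field adds the local term G/beta

print(C_near, C_far)           # → 1.25 0.25
```

The near-field worker thus sees a concentration several times the well-mixed room value whenever β is small relative to Q, which is why characterising the inter-zonal flow (and the determinants that drive it) matters so much for the model's predictions.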

Relevance: 30.00%
Publisher:
Abstract:

1. Landscape modification is often considered the principal cause of population decline in many bat species. Thus, schemes for bat conservation rely heavily on knowledge about species-landscape relationships. So far, however, few studies have quantified the possible influence of landscape structure on large-scale spatial patterns in bat communities. 2. This study presents quantitative models that use landscape structure to predict (i) spatial patterns in overall community composition and (ii) individual species' distributions, through canonical correspondence analysis and generalized linear models, respectively. A geographical information system (GIS) was then used to draw up maps of (i) overall community patterns and (ii) the distribution of potential species' habitats. These models relied on field data from the Swiss Jura mountains. 3. Eight descriptors of landscape structure accounted for 30% of the variation in bat community composition. For some species, more than 60% of the variance in distribution could be explained by landscape structure. Elevation, forest or woodland cover, lakes and suburbs were the most frequent predictors. 4. This study shows that community composition in bats is related to landscape structure through species-specific relationships to resources. Owing to their nocturnal activity and the difficulty of remote identification, a comprehensive bat census is rarely possible, and we suggest that predictive modelling of the type described here provides an indispensable conservation tool.

Relevance: 30.00%
Publisher:
Abstract:

In this paper, we present and apply a new three-dimensional model for the prediction of canopy-flow and turbulence dynamics in open-channel flow. The approach uses a dynamic immersed boundary technique that is coupled in a sequentially staggered manner to a large eddy simulation. Two different biomechanical models are developed depending on whether the vegetation is dominated by bending or tensile forces. For bending plants, a model structured on the Euler-Bernoulli beam equation has been developed, whilst for tensile plants, an N-pendula model has been developed. Validation against flume data shows good agreement and demonstrates that for a given stem density, the models are able to simulate the extraction of energy from the mean flow at the stem-scale which leads to the drag discontinuity and associated mixing layer.
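For the bending-plant case, the Euler-Bernoulli idealisation reduces, for a uniform distributed drag load on a cantilevered stem, to the classic tip-deflection result w = qL⁴/(8EI). A sketch with illustrative values (not flume-calibrated, and valid only in the small-deflection regime the beam equation assumes):

```python
# Tip deflection of a bending plant stem idealised as an Euler-Bernoulli
# cantilever under a uniform drag load.  Parameter values are illustrative.
q = 0.5        # drag load per unit stem length, N/m
L = 0.5        # stem length, m
E = 1e9        # Young's modulus of the stem tissue, Pa
I = 1e-10      # second moment of area of the stem cross-section, m^4

w_tip = q * L ** 4 / (8 * E * I)   # classic cantilever tip-deflection formula
print(w_tip)                       # metres
```

In the coupled simulation the drag load itself depends on the local flow, so the structural update and the large eddy simulation must be iterated, which is what the sequentially staggered immersed-boundary coupling in the paper provides.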

Relevance: 30.00%
Publisher:
Abstract:

The carbohydrate-binding specificity of lectins from the seeds of Canavalia maritima and Dioclea grandiflora was studied by hapten inhibition of haemagglutination, using various sugars and sugar derivatives as inhibitors, including N-acetylneuraminic acid and N-acetylmuramic acid. Despite some discrepancies, both lectins exhibited a carbohydrate-binding specificity very similar to that previously reported for other lectins of the Diocleinae (tribe Phaseoleae, sub-tribe Diocleinae). Accordingly, both lectins exhibited almost identical hydropathic profiles, and their three-dimensional models built from the atomic coordinates of ConA looked very similar. However, docking experiments with glucose and mannose in their monosaccharide-binding sites, by comparison with the ConA-mannose complex used as a model, revealed conformational changes in side chains of the amino acid residues involved in the binding of monosaccharides. These results fully agree with crystallographic data showing that binding of specific ligands to ConA requires conformational changes of its monosaccharide-binding site.

Relevance: 30.00%
Publisher:
Abstract:

Despite the tremendous amount of data collected in the field of ambulatory care, political authorities still lack synthetic indicators to provide them with a global view of health services utilization and costs related to various types of diseases. Moreover, public health indicators fail to provide useful information for physicians' accountability purposes. The approach is based on the Swiss context, which is characterized by the greatest frequency of medical visits in Europe, the highest rate of growth for care expenditure, poor public information but a lot of structured data (new fee system introduced in 2004). The proposed conceptual framework is universal and based on descriptors of six entities: general population, people with poor health, patients, services, resources and effects. We show that most conceptual shortcomings can be overcome and that the proposed indicators can be achieved without threatening privacy protection, using modern cryptographic techniques. Twelve indicators are suggested for the surveillance of the ambulatory care system, almost all based on routinely available data: morbidity, accessibility, relevancy, adequacy, productivity, efficacy (from the points of view of the population, people with poor health, and patients), effectiveness, efficiency, health services coverage and financing. The additional costs of this surveillance system should not exceed Euro 2 million per year (Euro 0.3 per capita).