930 results for Workflow Patterns, UML, Business Process Modelling
Abstract:
Altitudinal tree lines are mainly constrained by temperature, but can also be influenced by factors such as human activity, particularly in the European Alps, where centuries of agricultural use have affected the tree-line. Over the last decades this trend has been reversed due to changing agricultural practices and land-abandonment. We aimed to combine a statistical land-abandonment model with a forest dynamics model, to take into account the combined effects of climate and human land-use on the Alpine tree-line in Switzerland. Land-abandonment probability was expressed by a logistic regression function of degree-day sum, distance from forest edge, soil stoniness, slope, proportion of employees in the secondary and tertiary sectors, proportion of commuters and proportion of full-time farms. This was implemented in the TreeMig spatio-temporal forest model. Distance from forest edge and degree-day sum vary through feed-back from the dynamics part of TreeMig and climate change scenarios, while the other variables remain constant for each grid cell over time. The new model, TreeMig-LAb, was tested on theoretical landscapes, where the variables in the land-abandonment model were varied one by one. This confirmed the strong influence of distance from forest and slope on the abandonment probability. Degree-day sum has a more complex role, with opposite influences on land-abandonment and forest growth. TreeMig-LAb was also applied to a case study area in the Upper Engadine (Swiss Alps), along with a model where abandonment probability was a constant. Two scenarios were used: natural succession only (100% probability) and a probability of abandonment based on past transition proportions in that area (2.1% per decade). The former showed new forest growing in all but the highest-altitude locations. The latter was more realistic as to numbers of newly forested cells, but their location was random and the resulting landscape heterogeneous. 
Using the logistic regression model gave results consistent with observed patterns of land-abandonment: existing forests expanded and gaps closed, leading to an increasingly homogeneous landscape.
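The abandonment model described above is a standard logistic regression over seven predictors. A minimal sketch of how such a probability could be evaluated per grid cell; the coefficient values are entirely hypothetical (the fitted values are not given in the abstract), chosen only so that abandonment is more likely far from the forest edge and on steep slopes, consistent with the sensitivity tests reported:

```python
import math

def abandonment_probability(degree_day_sum, dist_forest_edge, stoniness,
                            slope, share_secondary_tertiary,
                            share_commuters, share_fulltime_farms, coef):
    """Logistic regression over the seven predictors named in the
    abstract; returns P(abandonment) for one grid cell."""
    z = (coef["intercept"]
         + coef["ddeg"] * degree_day_sum
         + coef["dist"] * dist_forest_edge
         + coef["stone"] * stoniness
         + coef["slope"] * slope
         + coef["sector23"] * share_secondary_tertiary
         + coef["commute"] * share_commuters
         + coef["farms"] * share_fulltime_farms)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients, for illustration only.
coef = {"intercept": -2.0, "ddeg": -0.001, "dist": 0.004, "stone": 0.5,
        "slope": 0.03, "sector23": 1.0, "commute": 0.5, "farms": -1.5}
p_near = abandonment_probability(1200, 250, 0.2, 25, 0.6, 0.3, 0.1, coef)
p_far = abandonment_probability(1200, 500, 0.2, 25, 0.6, 0.3, 0.1, coef)
```

In the coupled model, distance from forest edge and degree-day sum would be updated each time step from the forest dynamics and the climate scenario, while the socio-economic predictors stay fixed per cell, as the abstract notes.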
Abstract:
One hypothesis for the origin of alkaline lavas erupted on oceanic islands and in intracontinental settings is that they represent the melts of amphibole-rich veins in the lithosphere (or melts of their dehydrated equivalents if metasomatized lithosphere is recycled into the convecting mantle). Amphibole-rich veins are interpreted as cumulates produced by crystallization of low-degree melts of the underlying asthenosphere as they ascend through the lithosphere. We present the results of trace-element modelling of the formation and melting of veins formed in this way with the goal of testing this hypothesis and for predicting how variability in the formation and subsequent melting of such cumulates (and adjacent cryptically and modally metasomatized lithospheric peridotite) would be manifested in magmas generated by such a process. Because the high-pressure phase equilibria of hydrous near-solidus melts of garnet lherzolite are poorly constrained and given the likely high variability of the hypothesized accumulation and remelting processes, we used Monte Carlo techniques to estimate how uncertainties in the model parameters (e.g. the compositions of the asthenospheric sources, their trace-element contents, and their degree of melting; the modal proportions of crystallizing phases, including accessory phases, as the asthenospheric partial melts ascend and crystallize in the lithosphere; the amount of metasomatism of the peridotitic country rock; the degree of melting of the cumulates and the amount of melt derived from the metasomatized country rock) propagate through the process and manifest themselves as variability in the trace-element contents and radiogenic isotopic ratios of model vein compositions and erupted alkaline magma compositions. We then compare the results of the models with amphibole observed in lithospheric veins and with oceanic and continental alkaline magmas. 
While the trace-element patterns of the near-solidus peridotite melts, the initial anhydrous cumulate assemblage (clinopyroxene ± garnet ± olivine ± orthopyroxene), and the modelled coexisting liquids do not match the patterns observed in alkaline lavas, our calculations show that with further crystallization and the appearance of amphibole (and accessory minerals such as rutile, ilmenite and apatite) the calculated cumulate assemblages have trace-element patterns that closely match those observed in the veins and lavas. These calculated hydrous cumulate assemblages are highly enriched in incompatible trace elements and share many similarities with the trace-element patterns of alkaline basalts observed in oceanic or continental settings, such as positive Nb/La, negative Ce/Pb, and similar slopes of the rare earth elements. By varying the proportions of trapped liquid and thus simulating the cryptic and modal metasomatism observed in peridotite that surrounds these veins, we can model the variations in Ba/Nb, Ce/Pb, and Nb/U ratios that are observed in alkaline basalts. If the isotopic compositions of the initial low-degree peridotite melts are similar to the range observed in mid-ocean ridge basalt, our model calculations produce cumulates that would have isotopic compositions similar to those observed in most alkaline ocean island basalt (OIB) and continental magmas after ~0.15 Gyr. However, producing alkaline basalts with HIMU isotopic compositions requires much longer residence times (i.e. 1-2 Gyr), consistent with subduction and recycling of metasomatized lithosphere through the mantle, or with an alternative source such as a heterogeneous asthenosphere. 
These modelling results support the interpretation proposed by various researchers that amphibole-bearing veins represent cumulates formed during the differentiation of a volatile-bearing low-degree peridotite melt and that these cumulates are significant components of the sources of alkaline OIB and continental magmas. The results of the forward models provide the potential for detailed tests of this class of hypotheses for the origin of alkaline magmas worldwide and for interpreting major and minor aspects of the geochemical variability of these magmas.
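The Monte Carlo propagation described above can be illustrated with the simplest ingredient of such forward models, the batch-melting equation C_l = C_0 / (D + F(1 - D)): uncertain inputs are drawn repeatedly and the spread of the output is summarized. The uniform ranges below are placeholders for a highly incompatible element in a low-degree melt, not the paper's actual inputs:

```python
import random

def batch_melt(c0, bulk_d, melt_fraction):
    """Batch melting: liquid concentration C_l = C_0 / (D + F*(1 - D))."""
    return c0 / (bulk_d + melt_fraction * (1.0 - bulk_d))

# Draw D and F from placeholder ranges and propagate to the melt.
random.seed(0)
samples = sorted(batch_melt(1.0,
                            random.uniform(0.001, 0.05),   # bulk D
                            random.uniform(0.005, 0.02))   # melt fraction F
                 for _ in range(10000))
median = samples[len(samples) // 2]
p5, p95 = samples[500], samples[9500]   # 5th and 95th percentiles
```

The real model chains many such uncertain steps (source composition, modal proportions of crystallizing and accessory phases, degree of cumulate remelting) and carries the results into radiogenic-isotope evolution; the pattern of the calculation is the same.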
Abstract:
This thesis seeks to provide an understanding of contemporary Irish social drinking patterns by conducting a detailed analysis of the evolving sociological theories of alcohol consumption in Ireland. ‘Alcohol is a social drug which, to this day, evokes the divisive moral qualities that originated, or at least were solidified, in the last century with the birth of temperance movements’ (Cassidy, 1997:175). The temperance movement in Ireland under Father Mathew, a legacy which still reverberates in Irish society, served to further ingrain the ‘image of the whisky drinking Irishman’ (Ibid: 17). This is seen in such work as Stivers (1976), who uses sociological labelling theory to provide verification of a deviant Irish status, biologically, socially and culturally predisposed to alcohol. The author argues that these temperance movements sought to remove the linkages of alcohol and “Irishness”, but this quasi-stigmatisation process created a “self-fulfilling prophecy”, which further abetted the legitimisation of alcohol within cultural spheres. The tourism industry, in connection with drink manufacturers, has had a monumental role in alcohol’s contemporary position within the upper echelons of Irish culture and heritage. Their hand in the commodification of “Stage Irishry”, seen as “craic”, has further entrenched the links between consumption of alcohol and the consumption of Irish identity (McGovern, 2002). Furthermore, commercial interests are keen to cash in and maintain the dominance of alcohol in Irish society. This thesis concludes that this factor, in connection with the accelerated modernisation that Ireland has experienced since the mid-nineties, has malleable consequences for Irish society. 
As Keohane and Kuhling (2007) assert, post-modern consumption patterns of excess and ‘insatiability’ have been introduced into contemporary Irish drinking patterns and are affecting the nature of alcohol consumption in Ireland. This resource was contributed by The National Documentation Centre on Drug Use.
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out, incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out in the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for the runout zone delimitation. Firstly, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). Estimation of debris flow magnitude was omitted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model, with a 10 m resolution, was used together with land use, geology and debris flow hazard initiation maps as inputs of the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Afterwards, runout areas were calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared. The first simply delimits five hazard zones, while the second incorporates the information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. 
Limitations of the modelling arise mainly from the models applied and the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of the debris flow spreading, provided results suitable for subsequent hazard and risk studies. However, for the validation and transferability of the parameters and results to other study areas, more testing is needed.
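Runout spreading of the kind computed above rests on multiple-flow-direction algorithms that split flow among downslope neighbours. A minimal sketch in the spirit of Holmgren's slope-weighted splitting, with simplifying assumptions of unit cell spacing and ignored diagonal distances (the actual Flow-R implementation additionally applies energy-based runout limits):

```python
def mfd_weights(z_center, z_neighbors, x=4.0):
    """Slope-weighted multiple-flow-direction split: each lower neighbour
    receives flow in proportion to its elevation drop raised to the
    exponent x (a large x approaches a single flow direction)."""
    drops = [max(z_center - zn, 0.0) for zn in z_neighbors]
    powered = [d ** x for d in drops]
    total = sum(powered)
    if total == 0.0:
        return [0.0] * len(z_neighbors)  # pit or flat cell: no outflow
    return [p / total for p in powered]

# A 100 m cell and its eight neighbours: most flow follows the steepest drop.
w = mfd_weights(100.0, [101.0, 99.0, 98.0, 100.0, 97.0, 100.0, 99.5, 100.5])
```

Iterating such a split downslope from each source cell yields the spreading-direction probabilities that distinguish the second hazard map from the first.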
Abstract:
This report presents the development process of an application under the J2EE paradigm. On the one hand, it describes the steps carried out in specification, analysis and design, using UML as the fundamental modelling tool; on the other, it aims to show that the use of frameworks and the application of design patterns substantially ease the development process.
Abstract:
Research has demonstrated that landscape or watershed scale processes can influence instream aquatic ecosystems, in terms of the impacts of delivery of fine sediment, solutes and organic matter. Testing such impacts upon populations of organisms (i.e. at the catchment scale) has not proven straightforward and differences have emerged in the conclusions reached. This is: (1) partly because different studies have focused upon different scales of enquiry; but also (2) because the emphasis upon upstream land cover has rarely addressed the extent to which such land covers are hydrologically connected, and hence able to deliver diffuse pollution, to the drainage network. However, there is a third issue. In order to develop suitable hydrological models, we need to conceptualise the process cascade. To do this, we need to know what matters to the organism being impacted by the hydrological system, such that we can identify which processes need to be modelled. Acquiring such knowledge is not easy, especially for organisms like fish that might occupy very different locations in the river over relatively short periods of time. However, and inevitably, hydrological modellers have started by building up piecemeal the aspects of the problem that we think matter to fish. Herein, we report two developments: (a) for the case of sediment-associated diffuse pollution from agriculture, a risk-based modelling framework, SCIMAP, has been developed, which is distinct because it has an explicit focus upon hydrological connectivity; and (b) we use spatially distributed ecological data to infer the processes and the associated process parameters that matter to salmonid fry. We apply the model to spatially distributed salmon and fry data from the River Eden, Cumbria, England. The analysis shows, quite surprisingly, that arable land covers are relatively unimportant as drivers of fry abundance. 
What matters most is intensive pasture, a land cover that could be associated with a number of stressors on salmonid fry (e.g. pesticides, fine sediment) and which allows us to identify a series of risky field locations, where this land cover is readily connected to the river system by overland flow.
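The emphasis on hydrological connectivity can be caricatured in a few lines: a field's export risk only reaches the channel if its whole flow path is connected, so the driest point along the path controls delivery. The risk weight, wetness values and threshold below are illustrative assumptions, not SCIMAP's calibrated parameters:

```python
def delivery_risk(source_risk, wetness_path, wetness_threshold=6.0):
    """Locational risk sketch: a field's land-cover risk weight is scaled
    by its hydrological connectivity, here taken as the minimum
    topographic wetness index along its flow path to the channel,
    capped at full connectivity. The threshold is invented."""
    connectivity = min(min(wetness_path) / wetness_threshold, 1.0)
    return source_risk * connectivity

# Intensive pasture (high risk weight) on a fully connected path versus the
# same land cover behind a dry, disconnecting hillslope segment.
connected = delivery_risk(0.8, [7.2, 6.8, 9.1])
disconnected = delivery_risk(0.8, [7.2, 2.1, 9.1])
```

This is why the analysis flags "risky field locations": identical land covers carry very different instream risk depending on whether overland flow actually connects them to the river.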
Abstract:
Aim and Location: Although the alpine mouse Apodemus alpicola has been given species status since 1989, no distribution map has ever been constructed for this endemic alpine rodent in Switzerland. Based on redetermined museum material and using Ecological-Niche Factor Analysis (ENFA), habitat-suitability maps were computed for A. alpicola, and also for the co-occurring A. flavicollis and A. sylvaticus. Methods: In the particular case of habitat suitability models, classical approaches (GLMs, GAMs, discriminant analysis, etc.) generally require presence and absence data. The presence records provided by museums can clearly give useful information about species distribution and ecology and have already been used for knowledge-based mapping. In this paper, we apply ENFA, which requires only presence data, to build a habitat-suitability map of three species of Apodemus on the basis of museum skull collections. Results: Interspecific niche comparisons showed that A. alpicola is very specialized in its habitat selection, meaning that its habitat differs unequivocally from the average conditions in Switzerland, while both A. flavicollis and A. sylvaticus can be considered 'generalists' in the study area. Main conclusions: Although an adequate sampling design is the best way to collect ecological data for predictive modelling, this is a time- and money-consuming process and there are cases where time is simply not available, as for instance in endangered species conservation. On the other hand, museums, herbariums and other similar institutions hold huge presence data sets. By applying ENFA to such data it is possible to rapidly construct a habitat suitability model. The ENFA method not only provides two key measurements regarding the niche of a species (i.e. marginality and specialization), but also has ecological meaning, and allows the scientist to compare directly the niches of different species.
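The marginality measure mentioned in the conclusions can be sketched for a single environmental variable: how far the species' mean sits from the global mean, scaled by 1.96 global standard deviations (the published ENFA convention, so that a value near 1 indicates a strongly marginal species). The elevation values below are invented for illustration; the real ENFA operates on many variables after factor extraction:

```python
import statistics

def marginality(global_values, presence_values):
    """One-variable ENFA-style marginality: distance between the species'
    mean and the global mean, in units of 1.96 global standard
    deviations. Sketch only; the full method is multivariate."""
    mg = statistics.mean(global_values)
    sg = statistics.stdev(global_values)
    ms = statistics.mean(presence_values)
    return abs(ms - mg) / (1.96 * sg)

# Hypothetical elevations: the whole study area vs. cells with A. alpicola.
study_area = [400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2200]
alpicola_cells = [1800, 1900, 2000, 2100, 2200]
m = marginality(study_area, alpicola_cells)
```

A specialized species such as A. alpicola would score high on this measure, while generalists like A. flavicollis and A. sylvaticus would sit near the study-area average and score close to zero.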
Abstract:
This paper explores the plurality of institutional environments in which standards for the service sector are expected to support the rise of a global knowledge-based economy. Despite the careful wording of the World Trade Organization (WTO), a whole range of international bodies still have the capacity to define technical specifications affecting how services are expected to be traded on a worldwide basis. The analysis relies on global political economy approaches to extend to the area of service standards the assumption that the process of globalization does not oppose states and markets, but is a joint expression of both of them, including new patterns and agents of structural change through formal and informal power and regulatory practices. It analyses on a cross-institutional basis patterns of authority in the institutional setting of service standards in the context of the International Organisation for Standardisation (ISO), the European Union, and the United States. In contrast to conventional views opposing the American system to the ISO/European framework, the paper questions the robustness of this opposition by showing that institutional developments of service standards are likely to face trade-offs and compromises across those systems and between two opposing models of standardisation.
Formulation and Implementation of Air Quality Control Programmes: Patterns of Interest Consideration
Abstract:
This article investigates some central aspects of the relationships between programme structure and implementation of sulphur dioxide air quality control policies. Previous implementation research, primarily adopting American approaches, has neglected the connections between the processes of programme formulation and implementation. 'Programme', as the key variable in implementation studies, has been defined too narrowly. On the basis of theoretical and conceptual reflections and provisional empirical results from studies in France, Italy, England, and the Federal Republic of Germany, the authors demonstrate that an integral process analysis using a more extended programme concept is necessary if patterns of interest recognition in policies are to be discovered. Otherwise, the still important question of critical social science cannot be answered, namely, what is the impact of special interests upon implementation processes.
Abstract:
Aim: This study compares the direct, macroecological approach (MEM) for modelling species richness (SR) with the more recent approach of stacking predictions from individual species distribution models (S-SDM). We implemented both approaches on the same dataset and discuss their respective theoretical assumptions, strengths and drawbacks. We also tested how both approaches performed in reproducing observed patterns of SR along an elevational gradient. Location: Two study areas in the Alps of Switzerland. Methods: We implemented MEM by relating the species counts to environmental predictors with statistical models, assuming a Poisson distribution. S-SDM was implemented by modelling each species distribution individually and then stacking the obtained prediction maps in three different ways - summing binary predictions, summing random draws of binomial trials and summing predicted probabilities - to obtain a final species count. Results: The direct MEM approach yields nearly unbiased predictions centred around the observed mean values, but with a lower correlation between predictions and observations than that achieved by the S-SDM approaches. This method also cannot provide any information on species identity and, thus, community composition. It does, however, accurately reproduce the hump-shaped pattern of SR observed along the elevational gradient. The S-SDM approach summing binary maps can predict individual species and thus communities, but tends to overpredict SR. The two other S-SDM approaches - the summed binomial trials based on predicted probabilities and the summed predicted probabilities - do not overpredict richness, but they predict many competing end points of assembly or lose the individual species predictions, respectively. Furthermore, all S-SDM approaches fail to appropriately reproduce the observed hump-shaped patterns of SR along the elevational gradient. Main conclusions: The macroecological approach and S-SDM have complementary strengths. 
We suggest that both could be used in combination to obtain better SR predictions, for instance by constraining S-SDM with MEM predictions.
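The three stacking variants described in the Methods are easy to state in code. The occurrence probabilities below are made up for illustration; each row is one species' predicted habitat-suitability map over four cells:

```python
import random

def stack_binary(prob_maps, threshold=0.5):
    """SR per cell: sum of thresholded presence/absence predictions."""
    return [sum(1 for p in cell if p >= threshold) for cell in zip(*prob_maps)]

def stack_bernoulli(prob_maps, rng):
    """SR per cell: sum of one Bernoulli draw per species."""
    return [sum(1 for p in cell if rng.random() < p) for cell in zip(*prob_maps)]

def stack_probabilities(prob_maps):
    """SR per cell: sum of predicted probabilities (expected richness)."""
    return [sum(cell) for cell in zip(*prob_maps)]

# Three toy species over four cells.
maps = [[0.9, 0.6, 0.2, 0.1],
        [0.8, 0.4, 0.3, 0.05],
        [0.7, 0.55, 0.1, 0.6]]
binary = stack_binary(maps)            # keeps species identity, overpredicts
expected = stack_probabilities(maps)   # unbiased count, identity lost
draws = stack_bernoulli(maps, random.Random(1))  # one possible assembly
```

The trade-offs in the Results follow directly: thresholded sums keep community composition but inflate richness, summed probabilities give a calibrated count with no composition, and repeated Bernoulli draws yield many competing end points of assembly.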
Abstract:
This dissertation focuses on new technology commercialization, innovation and new business development. Industry-based novel technology may achieve commercialization through its transfer to a large research laboratory acting as a lead user and technical partner, and providing the new technology with complementary assets and meaningful initial use in social practice. The research lab benefits from the new technology and innovation through major performance improvements and cost savings. Such mutually beneficial collaboration between the lab and the firm does not require any additional administrative efforts or funds from the lab, yet requires openness to technologies and partner companies that may not be previously known to the lab. Labs achieve the benefits by applying a proactive procurement model that promotes active pre-tender search of new technologies and pre-tender testing and piloting of these technological options. The collaboration works best when based on the development needs of both parties. This means that first of all the lab has significant engineering activity with well-defined technological needs and second, that the firm has advanced prototype technology yet needs further testing, piloting and the initial market and references to achieve the market breakthrough. The empirical evidence of the dissertation is based on a longitudinal multiple-case study with the European Laboratory for Particle Physics. The key theoretical contribution of this study is that large research labs, including basic research, play an important role in product and business development toward the end, rather than the front-end, of the innovation process. This also implies that product-orientation and business-orientation can contribute to basic research. The study provides practical managerial and policy guidelines on how to initiate and manage mutually beneficial lab-industry collaboration and proactive procurement.
Abstract:
BACKGROUND: Adherence to combination antiretroviral therapy (cART) is a dynamic process, however, changes in adherence behavior over time are insufficiently understood. METHODS: Data on self-reported missed doses of cART was collected every 6 months in Swiss HIV Cohort Study participants. We identified behavioral groups associated with specific cART adherence patterns using trajectory analyses. Repeated measures logistic regression identified predictors of changes in adherence between consecutive visits. RESULTS: Six thousand seven hundred nine individuals completed 49,071 adherence questionnaires [median 8 (interquartile range: 5-10)] during a median follow-up time of 4.5 years (interquartile range: 2.4-5.1). Individuals were clustered into 4 adherence groups: good (51.8%), worsening (17.4%), improving (17.6%), and poor adherence (13.2%). Independent predictors of worsening adherence were younger age, basic education, loss of a roommate, starting intravenous drug use, increasing alcohol intake, depression, longer time with HIV, onset of lipodystrophy, and changing care provider. Independent predictors of improvements in adherence were regimen simplification, changing class of cART, less time on cART, and starting comedications. CONCLUSIONS: Treatment, behavioral changes, and life events influence patterns of drug intake in HIV patients. Clinical care providers should routinely monitor factors related to worsening adherence and intervene early to reduce the risk of treatment failure and drug resistance.
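The trajectory analysis used to identify the four adherence groups fits longitudinal mixture models. As a much simplified stand-in, equal-length adherence time series can be clustered with plain k-means; the toy trajectories below (fraction of doses taken per visit) mirror three of the four reported patterns, and the clustering is a geometric analogue, not the study's method:

```python
import random

def kmeans(series, k, iters=50, rng=None):
    """Plain k-means over equal-length trajectories. Group-based
    trajectory modelling actually fits polynomial trends by maximum
    likelihood; this is only a geometric analogue."""
    rng = rng or random.Random(0)
    centers = [list(s) for s in rng.sample(series, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in series:
            dists = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centers]
            groups[dists.index(min(dists))].append(s)
        for i, g in enumerate(groups):
            if g:  # keep the old centre if a group empties
                centers[i] = [sum(vals) / len(g) for vals in zip(*g)]
    return centers, groups

# Toy trajectories: stable-good, worsening, and improving adherence.
data = [[0.95, 0.96, 0.97], [0.90, 0.60, 0.30], [0.40, 0.70, 0.95],
        [0.98, 0.94, 0.99], [0.85, 0.50, 0.20], [0.30, 0.65, 0.90]]
centers, groups = kmeans(data, 3)
```

Grouping by trajectory shape rather than by average adherence is what lets the analysis separate "worsening" from "improving" patients who may have identical mean adherence.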
Abstract:
Reverse transcriptase (RT) is a multifunctional enzyme in the human immunodeficiency virus (HIV)-1 life cycle and represents a primary target for drug discovery efforts against HIV-1 infection. Two classes of RT inhibitors, the nucleoside RT inhibitors (NRTIs) and the non-nucleoside RT inhibitors (NNRTIs), are prominently used in highly active antiretroviral therapy in combination with other anti-HIV drugs. However, the rapid emergence of drug-resistant viral strains has limited the success rate of anti-HIV agents. Computational methods are a significant part of the drug design process and indispensable for studying drug resistance. In this review, recent advances in computer-aided drug design for the rational design of new compounds against HIV-1 RT using methods such as molecular docking, molecular dynamics, free energy calculations, quantitative structure-activity relationships, pharmacophore modelling and absorption, distribution, metabolism, excretion and toxicity prediction are discussed. Successful applications of these methodologies are also highlighted.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
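The two-stage structure proposed above - first decide where the zeros occur, then distribute the unit among the non-zero parts - can be sketched as a forward simulation. The version below uses the independent-Bernoulli incidence stage and an additive-logistic-normal draw for the conditional composition; all parameter values are invented for illustration:

```python
import math
import random

def simulate_composition(p_present, mu, sigma, rng):
    """Two-stage forward simulation: (1) independent Bernoulli draws mark
    which parts are essential zeros; (2) the unit sum is shared among the
    present parts via an additive-logistic-normal draw (exponentiate
    Gaussian log-abundances, then close to 1). Parameters are invented."""
    present = [rng.random() < p for p in p_present]
    if not any(present):
        return [0.0] * len(p_present)  # every part an essential zero
    logs = [rng.gauss(m, s) if inc else None
            for inc, m, s in zip(present, mu, sigma)]
    expvals = [math.exp(v) if v is not None else 0.0 for v in logs]
    total = sum(expvals)
    return [v / total for v in expvals]

rng = random.Random(42)
comp = simulate_composition([0.9, 0.6, 0.3], [0.0, 0.5, -0.5],
                            [1.0, 1.0, 1.0], rng)
```

Each simulated record naturally splits into a row of the incidence matrix (which parts are non-zero) and a row of the conditional compositional matrix (how the unit is shared among them), matching the data structure the paper describes.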
Abstract:
This research was based on a study of social enterprises in Brazil, to find out if and how these organizations plan and manage the succession process for their senior positions. The study investigated the subset of the associations dedicated to collectively producing goods and services, because they are formally constituted and aim to stimulate local development. The empirical research consisted of two stages. The first was a survey covering a sample of 378 organizations, to find out which of those had already undergone or were undergoing a succession process. The second interviewed the main manager of 32 organizations, to obtain a description of their succession experience. In this stage, the research aimed to analyze how the Individual, Organization and Environment dimensions interact to configure the succession process, identifying which factors of each of these dimensions can facilitate or limit this process. The following guiding elements were taken as the analytical basis: Individual dimension - leadership roles, skills and styles; Organization dimension - structure, planning, advisory boards, communication (transparency), control and evaluation; and Environment dimension - influence of the stakeholders (community, suppliers, clients, and business partners) on the succession process. The results indicated that succession in the researched associations is in the construction stage: it adapts to the requirements of current circumstances but is evidently in need of improvement in order for more effective planning and shared management of the process to be achieved.