910 results for Process-based model (PBM)


Relevance: 100.00%

Publisher:

Abstract:

Grigorij Kreidlin (Russia). A Comparative Study of Two Semantic Systems: Body Russian and Russian Phraseology. Mr. Kreidlin teaches in the Department of Theoretical and Applied Linguistics of the State University of Humanities in Moscow and worked on this project from August 1996 to July 1998. The classical approach to non-verbal and verbal oral communication is based on a traditional separation of body and mind. Linguists studied words and phrasemes, the products of mind activities, while gestures, facial expressions, postures and other forms of body language were left to anthropologists, psychologists, physiologists, and indeed to anyone but linguists. Only recently have linguists begun to turn their attention to gestures, and semiotic and cognitive paradigms are now appearing that raise the question of designing an integral model for the unified description of non-verbal and verbal communicative behaviour. This project attempted to elaborate lexical and semantic fragments of such a model, producing a co-ordinated semantic description of the main Russian gestures (including gestures proper, postures and facial expressions) and their natural language analogues. The concept of emblematic gestures and gestural phrasemes and of their semantic links permitted an appropriate description of the transformation of a body as a purely physical substance into a body as a carrier of essential attributes of Russian culture - the semiotic process called the culturalisation of the human body. Here the human body embodies a system of cultural values and displays them in a text within the area of phraseology and some other important language domains. The goal of this research was to develop a theory that would account for the fundamental peculiarities of the process. The model proposed is based on the unified lexicographic representation of verbal and non-verbal units in the Dictionary of Russian Gestures, which Mr.
Kreidlin had earlier compiled in collaboration with a group of his students. The Dictionary was originally oriented only towards reflecting how the lexical competence of Russian body language is represented in the Russian mind. Now a special type of phraseological zone has been designed to reflect explicitly the semantic relationships between the gestures in the entries and phrasemes and to provide the necessary information for a detailed description of these. All the definitions, rules of usage and the established correlations are written in a semantic meta-language. Several classes of Russian gestural phrasemes were identified, including those phrasemes and idioms with semantic definitions close to those of the corresponding gestures, those phraseological units that have lost touch with the related gestures (although etymologically they are derived from gestures that have gone out of use), and phrasemes and idioms which have semantic traces or reflexes inherited from the meaning of the related gestures. The basic assumptions and practical considerations underlying the work were as follows. (1) To compare meanings one has to be able to state them. To state the meaning of a gesture or a phraseological expression, one needs a formal semantic meta-language of propositional character that represents the cognitive and mental aspects of the codes. (2) The semantic contrastive analysis of any semiotic codes used in person-to-person communication also requires a single semantic meta-language, i.e. a formal semantic language of description. This language must be as linguistically and culturally independent as possible and yet must be open to interpretation through any culture and code. Another possible method of conducting comparative verbal-non-verbal semantic research is to work with different semantic meta-languages and semantic nets and to learn how to combine them, translate from one to another, etc.
in order to reach a common basis for the subsequent comparison of units. (3) The practical work in defining phraseological units and organising the phraseological zone in the Dictionary of Russian Gestures unexpectedly showed that semantic links between gestures and gestural phrasemes are reflected not only in common semantic elements and the syntactic structure of semantic propositions, but also in general and partial cognitive operations that are performed over semantic definitions. (4) In comparative semantic analysis one should take into account the different values and roles of inner form and image components in the semantic representation of non-verbal and verbal units. (5) For the most part, gestural phrasemes are direct semantic derivatives of gestures. The cognitive and formal techniques can be regarded as typological features for a future functional-semantic classification of gestural phrasemes: two phrasemes whose meaning can be obtained by the same cognitive or purely syntactic operations (or types of operations) over the meanings of the corresponding gestures belong by definition to one and the same class. The nature of many cognitive operations has not been studied well so far, but the first steps towards their comprehension and description have been taken. The research identified 25 logically possible classes of relationships between a gesture and a gestural phraseme. The calculation is based on theoretically possible formal (set-theory) correlations between the signifiers and signifieds of the non-verbal and verbal units. However, in order to examine which of them are realised in practice, a complete semantic and lexicographic description of all (not only central) everyday emblems and gestural phrasemes is required, and this unfortunately does not yet exist. Mr. Kreidlin suggests that the results of the comparative analysis of verbal and non-verbal units could also be used in other research areas such as the lexicography of emotions.

Abstract:

Climate change is expected to profoundly influence the hydrosphere of mountain ecosystems. The focus of current process-based research is centered on the reaction of glaciers and runoff to climate change; spatially explicit impacts on soil moisture remain widely neglected. We spatio-temporally analyzed the impact of the climate on soil moisture in a mesoscale high mountain catchment to facilitate the development of mitigation and adaptation strategies at the level of vegetation patterns. Two regional climate models were downscaled using three different approaches (statistical downscaling, delta change, and direct use) to drive a hydrological model (WaSiM-ETH) for the reference and scenario periods (1960–1990 and 2070–2100), resulting in an ensemble forecast of six members. For all ensemble members we found large changes in temperature, resulting in decreasing snow and ice storage and earlier runoff, but only small changes in evapotranspiration. The occurrence of downscaled dry spells was found to fluctuate greatly, causing soil moisture depletion and drought stress potential to show high variability in both space and time. In general, the choice of the downscaling approach had a stronger influence on the results than the applied regional climate model. All of the results indicate that summer soil moisture decreases, which leads to more frequent declines below a critical soil moisture level and an advanced evapotranspiration deficit. Forests up to an elevation of 1800 m a.s.l. are likely to be threatened the most, while alpine areas and most pastures remain nearly unaffected. Nevertheless, the ensemble variability was found to be extremely high and should be interpreted as a bandwidth of possible future drought stress situations.
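
Of the three downscaling approaches, delta change has a particularly simple core: the observed reference-period series is shifted by the climate model's mean change signal between the scenario and reference runs. A minimal sketch (the numbers are illustrative, not values from the study):

```python
from statistics import mean

def delta_change(obs_ref, rcm_ref, rcm_scen, additive=True):
    """Delta-change downscaling sketch: perturb the observed reference-period
    series by the RCM's mean climate-change signal. The additive form suits
    temperature; a multiplicative factor is commonly used for precipitation."""
    if additive:
        delta = mean(rcm_scen) - mean(rcm_ref)
        return [x + delta for x in obs_ref]
    factor = mean(rcm_scen) / mean(rcm_ref)
    return [x * factor for x in obs_ref]

# Hypothetical series: observed temperatures plus RCM output for both periods.
obs = [1.0, 2.0, 3.0]
scen = delta_change(obs, rcm_ref=[0.5, 1.5, 2.5], rcm_scen=[3.5, 4.5, 5.5])
# scen is the observed series shifted by the mean signal of +3.0
```

A statistical-downscaling or direct-use ensemble member would replace this perturbation with, respectively, a calibrated transfer function or the raw RCM output.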

Abstract:

Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainties on the collapse risk of light-frame wood construction is evaluated. The collapse risks of the same building subjected to maximum considered earthquakes at different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causing huge economic losses and threatening life safety. Limited study has been performed to investigate the snow hazard when combined with a seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing the simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. The snow accumulation has a significant influence on the seismic losses of the building. The Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction.
For homeowners and stakeholders, risk expressed in terms of economic losses is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of the building subjected to mainshock-aftershock sequences. Aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess the loss of the building subjected to combined earthquake and snow loads. The proposed framework is proven to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
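
A filtered Poisson process for snow load can be sketched in a few lines: storms arrive at Poisson-distributed times, each deposits a random depth, and the accumulated load decays between events. This captures the accumulation and persistence of snow that a simple Bernoulli (snow/no-snow) model misses. All parameter values below are illustrative assumptions, not those calibrated in the study:

```python
import random

def simulate_snow_fpp(days=180, rate=0.2, mean_depth=0.1, melt=0.05, seed=1):
    """Filtered Poisson process sketch: Poisson snowfall arrivals with
    exponentially distributed depths, plus linear decay (melt) of the
    accumulated load between daily steps."""
    rng = random.Random(seed)
    load, history = 0.0, []
    for _ in range(days):
        load *= (1.0 - melt)               # melt between daily steps
        if rng.random() < rate:            # Poisson arrival within one step
            load += rng.expovariate(1.0 / mean_depth)
        history.append(load)
    return history
```

Sampling the simulated load at earthquake occurrence times would give the coincident snow load needed for a combined-hazard loss assessment.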

Abstract:

Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models for each species were fitted, one ignoring the measurement error, which is the “naïve” approach, and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), and core to the implementation of RC, in different spatial patterns and diameter distributions, revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. Results show a systematic bias even when all the assumptions made by the authors are met.
I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify the suggested future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and alive trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models, and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
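
The regression calibration idea can be shown schematically: before fitting the logistic mortality model, the error-prone competition measurement W is replaced by the best linear predictor of the true value X. A minimal sketch assuming the reliability ratio lambda = var(X)/var(W) is known (in practice it must be estimated, e.g., via an error variance estimator such as Stage and Wykoff's):

```python
def regression_calibration(w, reliability):
    """RC sketch: shrink each error-prone measurement toward the sample mean,
    E[X | W] = mu + lambda * (W - mu), where lambda = var(X)/var(W) is the
    reliability ratio (lambda = 1 means no measurement error)."""
    mu = sum(w) / len(w)
    return [mu + reliability * (wi - mu) for wi in w]

# Illustrative competition values (not from the study), reliability 0.8:
calibrated = regression_calibration([10.0, 20.0, 30.0], reliability=0.8)
# calibrated values are pulled toward the mean: [12.0, 20.0, 28.0]
```

The "naïve" model fits directly on the raw measurements; the RC model fits on the calibrated values and applies the same shrinkage at prediction time.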

Abstract:

A novel solution to the long-standing issue of chip entanglement and breakage in metal cutting is presented in this dissertation. Through this work, an attempt is made to achieve universal chip control in machining by using chip guidance and subsequent breakage by backward bending (tensile loading of the chip's rough top surface) to effectively control long continuous chips into small segments. One major limitation of using chip breaker geometries in disposable carbide inserts is that the application range is limited to a narrow band depending on cutting conditions. Even within a recommended operating range, chip breakers do not function effectively as designed due to the inherent variations of the cutting process. Moreover, for a particular process, matching the chip breaker geometry with the right cutting conditions to achieve effective chip control is a very iterative process. The existence of a large variety of proprietary chip breaker designs further exacerbates the problem of easily implementing a robust and comprehensive chip control technique. To address the need for a robust and universal chip control technique, a new method is proposed in this work. By using a single tool top form geometry coupled with a tooling system for inducing chip breaking by backward bending, the proposed method achieves comprehensive chip control over a wide range of cutting conditions. A geometry-based model is developed to predict a variable edge inclination angle that guides the chip flow to a predetermined target location. Chip kinematics for the new tool geometry is examined via photographic evidence from experimental cutting trials. Both qualitative and quantitative methods are used to characterize the chip kinematics. Results from the chip characterization studies indicate that the chip flow and final form show a remarkable consistency across multiple levels of workpiece and tool configurations as well as cutting conditions.
A new tooling system is then designed to comprehensively break the chip by backward bending. Test results with the new tooling system prove that by utilizing the chip guidance and backward bending mechanism, long continuous chips can be more consistently broken into smaller segments that are generally deemed acceptable or good chips. It is found that the proposed tool can be applied effectively over a wider range of cutting conditions than present chip breakers, thus possibly taking the first step towards achieving universal chip control in machining.

Abstract:

Most accounts of child language acquisition use adult-like syntactic categories and schemas (formal grammars) as analytic tools, with little concern for whether they are psychologically real for young children. Recent research has demonstrated, however, that children do not operate initially with such abstract linguistic entities, but instead operate on the basis of concrete, item-based constructions. Children construct more abstract linguistic constructions only gradually – on the basis of linguistic experience in which frequency plays a key role – and they constrain these constructions to their appropriate ranges of use only gradually as well – again on the basis of linguistic experience in which frequency plays a key role. The best account of first language acquisition is provided by a construction-based, usage-based model in which children process the language they experience in discourse interactions with other persons, relying explicitly and exclusively on social and cognitive skills that children of this age are known to possess.

Abstract:

Digital data sets are increasingly available in the dental field today, for example as a result of rapid progress in imaging techniques. CAD/CAM systems have long been state of the art in dental technology. Such systems, however, require a plaster model, which is digitized by the dental technician with an optical scanner at the start of the process chain. The ongoing development of intraoral scanners now also allows the dentist to digitize entire jaws directly in the patient's mouth. Nevertheless, particularly for aesthetic restorations, the physical dental model remains the indispensable working basis for the technician. This thesis therefore presents a rapid manufacturing process, based on stereolithography, for the production of dental models. The specific requirements of additive manufacturing processes for dental applications regarding precision, robustness and cost-effectiveness are discussed, and a newly developed build strategy is presented by means of which these requirements are met.

Abstract:

The induction of late long-term potentiation (L-LTP) involves complex interactions among second-messenger cascades. To gain insights into these interactions, a mathematical model was developed for L-LTP induction in the CA1 region of the hippocampus. The differential equation-based model represents actions of protein kinase A (PKA), MAP kinase (MAPK), and CaM kinase II (CaMKII) in the vicinity of the synapse, and activation of transcription by CaM kinase IV (CaMKIV) and MAPK. L-LTP is represented by increases in a synaptic weight. Simulations suggest that steep, supralinear stimulus-response relationships between stimuli (e.g., elevations in [Ca(2+)]) and kinase activation are essential for translating brief stimuli into long-lasting gene activation and synaptic weight increases. Convergence of multiple kinase activities to induce L-LTP helps to generate a threshold whereby the amount of L-LTP varies steeply with the number of brief (tetanic) electrical stimuli. The model simulates tetanic, theta-burst, pairing-induced, and chemical L-LTP, as well as L-LTP due to synaptic tagging. The model also simulates inhibition of L-LTP by inhibition of MAPK, CaMKII, PKA, or CaMKIV. The model predicts results of experiments to delineate mechanisms underlying L-LTP induction and expression. For example, the cAMP antagonist Rp-cAMPS, which inhibits L-LTP induction, is predicted to inhibit ERK activation. The model also appears useful to clarify similarities and differences between hippocampal L-LTP and long-term synaptic strengthening in other systems.
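
The steep, supralinear stimulus-response relationships that the simulations identify as essential are commonly represented by Hill functions; a minimal sketch (the half-activation constant and Hill coefficient are illustrative, not the model's fitted parameters):

```python
def hill_activation(stimulus, k_half=0.5, n=4):
    """Supralinear stimulus-response: fraction of kinase activated as a
    Hill function of the stimulus (e.g., a [Ca2+] elevation). Large n
    makes the response switch-like, translating brief stimuli into
    near-all-or-none activation."""
    return stimulus ** n / (k_half ** n + stimulus ** n)

# A sub-threshold stimulus barely activates the kinase, while one at
# twice the half-activation constant activates it almost fully:
low, high = hill_activation(0.25), hill_activation(1.0)
```

Chaining several such steep stages (e.g., Ca2+ to CaMKIV to transcription) sharpens the overall threshold further, which is one way convergence of multiple kinases yields the steep dependence on the number of tetani.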

Abstract:

Serial correlation of extreme midlatitude cyclones observed at the storm track exits is explained by deviations from a Poisson process. To model these deviations, we apply fractional Poisson processes (FPPs) to extreme midlatitude cyclones, which are defined by the 850 hPa relative vorticity of the ERA-Interim reanalysis during boreal winter (DJF) and summer (JJA) seasons. Extremes are defined by a 99% quantile threshold in the grid-point time series. In general, FPPs are based on long-term memory and lead to non-exponential return time distributions. The return times are described by a Weibull distribution to approximate the Mittag–Leffler function in the FPPs. The Weibull shape parameter yields a dispersion parameter that agrees with results found for midlatitude cyclones. The memory of the FPP, which is determined by detrended fluctuation analysis, provides an independent estimate for the shape parameter. Thus, the analysis exhibits a concise framework linking the deviation from Poisson statistics (a dispersion parameter), non-exponential return times and memory (correlation) on the basis of a single parameter. The results have potential implications for the predictability of extreme cyclones.
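
Non-exponential Weibull return times of the kind used here to approximate the Mittag-Leffler function are easy to sample by inverting the Weibull CDF. A shape parameter below one produces the clustered, overdispersed return times characteristic of an FPP, while a shape of one recovers the ordinary Poisson (exponential) case. A sketch with illustrative parameters:

```python
import math
import random

def weibull_return_times(n, shape, scale=1.0, seed=0):
    """Sample n inter-event (return) times by inverse-CDF sampling of a
    Weibull distribution: t = scale * (-ln(1 - u))**(1/shape).
    shape < 1 gives heavy-tailed, clustered return times; shape = 1 is
    the exponential (Poisson) special case."""
    rng = random.Random(seed)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

# Illustrative draw: overdispersed return times (shape < 1)
times = weibull_return_times(1000, shape=0.7)
```

Comparing the sample's coefficient of variation against 1 is then a quick empirical check of overdispersion, mirroring the dispersion parameter discussed in the abstract.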

Abstract:

The medical training model is currently immersed in a process of change. The new paradigm is intended to be more effective, more integrated within the healthcare system, and strongly oriented towards the direct application of knowledge to clinical practice. Compared with the established training system based on certification of the completion of a series of rotations and stays in certain healthcare units, the new model proposes a more structured training process based on the gradual acquisition of specific competences, in which residents must play an active role in designing their own training program. Training based on competences guarantees more transparent, updated and homogeneous learning of objective quality, which can be recognized internationally. The tutors play a key role as the main directors of the process, and institutional commitment to their work is crucial. In this context, tutors should receive time and specific training to allow the evaluation of training as the cornerstone of the new model. New forms of objective summative and formative evaluation should be introduced to guarantee that the predefined competences and skills are effectively acquired. The free movement of specialists within Europe is very desirable and implies that training quality must be high and mutually recognized among the different countries. The Competency Based training in Intensive Care Medicine in Europe program is our main reference for achieving this goal. Scientific societies in turn must promote and facilitate all those initiatives destined to improve healthcare quality and therefore specialist training. They have the mission of designing strategies and processes that favor training, accreditation and advisory activities with the government authorities.

Abstract:

Acid rock drainage (ARD) is a problem of international relevance with substantial environmental and economic implications. Reactive transport modeling has proven a powerful tool for the process-based assessment of metal release and attenuation at ARD sites. Although a variety of models has been used to investigate ARD, a systematic model intercomparison has not been conducted to date. This contribution presents such a model intercomparison involving three synthetic benchmark problems designed to evaluate model results for the most relevant processes at ARD sites. The first benchmark (ARD-B1) focuses on the oxidation of sulfide minerals in an unsaturated tailing impoundment, affected by the ingress of atmospheric oxygen. ARD-B2 extends the first problem to include pH buffering by primary mineral dissolution and secondary mineral precipitation. The third problem (ARD-B3) in addition considers the kinetic and pH-dependent dissolution of silicate minerals under low pH conditions. The set of benchmarks was solved by four reactive transport codes, namely CrunchFlow, Flotran, HP1, and MIN3P. The results comparison focused on spatial profiles of dissolved concentrations, pH and pE, pore gas composition, and mineral assemblages. In addition, results of transient profiles for selected elements and cumulative mass loadings were considered in the intercomparison. Despite substantial differences in model formulations, very good agreement was obtained between the various codes. Residual deviations between the results are analyzed and discussed in terms of their implications for capturing system evolution and long-term mass loading predictions.

Abstract:

Synopsis: Sport organisations are facing multiple challenges originating from an increasingly complex and dynamic environment in general, and from internal changes in particular. Our study seeks to reveal and analyse the causes for professionalization processes in international sport federations, the forms resulting from it, as well as related consequences. Abstract: AIM OF ABSTRACT/PAPER - RESEARCH QUESTION Sport organisations are facing multiple challenges originating from an increasingly complex and dynamic environment in general, and from internal changes in particular. In this context, professionalization seems to have been adopted by sport organisations as an appropriate strategy to respond to pressures such as becoming more “business-like”. The ongoing study seeks to reveal and analyse the internal and external causes for professionalization processes in international sport federations, the forms resulting from it (e.g. organisational, managerial, economic) as well as related consequences on objectives, values, governance methods, performance management, or rationalisation. THEORETICAL BACKGROUND/LITERATURE REVIEW Studies on sport as a specific non-profit sector mainly focus on the prospect of the “professionalization of individuals” (Thibault, Slack & Hinings, 1991), often within sport clubs (Thiel, Meier & Cachay, 2006) and national sport federations (Seippel, 2002) or on organisational change (Girginov & Sandanski, 2008; Slack & Hinings, 1987, 1992; Slack, 1985, 2001), thus leaving broader analysis on governance, management and professionalization in sport organisations an unaccomplished task. In order to further current research on the above-mentioned topics, our intention is to analyse causes, forms and consequences of professionalization processes in international sport federations.
The social theory of action (Coleman, 1986; Esser, 1993) has been chosen as the appropriate theoretical framework, from which a multi-level framework for the analysis of sport organisations is derived (Nagel, 2007). In light of the multi-level framework, sport federations are conceptualised as corporative actors whose objectives are defined and implemented with regard to the interests of member organisations (Heinemann, 2004) and/or other pressure groups. In order to understand the social acting and social structures (Giddens, 1984) of sport federations, two levels are the focus of our analysis: the macro level examining the environment at large (political, social, economic systems etc.) and the meso level (Esser, 1999) examining organisational structures, actions and decisions of the federation's headquarters as well as member organisations. METHODOLOGY, RESEARCH DESIGN AND DATA ANALYSIS The multi-level framework mentioned seeks to gather and analyse information on causes, forms and consequences of professionalization processes in sport federations. It is applied in a twofold approach: first, an exploratory study based on nine semi-structured interviews with experts from umbrella sport organisations (IOC, WADA, ASOIF, AIOWF, etc.) as well as the analysis of related documents, relevant reports (IOC report 2000 on governance reform, Agenda 2020, etc.) and important moments of change in the Olympic Movement (Olympic revenue share, IOC evaluation criteria, etc.); and secondly, several case studies. Whereas the exploratory study focuses more on the causes for professionalization at the external, internal and headquarters levels as depicted in the literature, the case studies rather focus on forms and consequences. Applying our conceptual framework, the analysis of forms is built around three dimensions: 1) Individuals (persons and positions), 2) Processes, structures (formalisation, specialisation), 3) Activities (strategic planning).
With regard to consequences, we centre our attention on expectations of and relationships with stakeholders (e.g. cooperation with business partners), structure, culture and processes (e.g. governance models, performance), and expectations of and relationships with member organisations (e.g. centralisation vs. regionalisation). For the case studies, a mixed-method approach is applied to collect relevant data: questionnaires for rather quantitative data, interviews for rather qualitative data, as well as document and observatory analysis. RESULTS, DISCUSSION AND IMPLICATIONS/CONCLUSIONS With regard to causes of professionalization processes, we analyse the content of three different levels: 1. the external level, where the main pressure derives from financial resources (stakeholders, benefactors) and important turning points (scandals, media pressure, IOC requirements for Olympic sports); 2. the internal level, where pressure from member organisations turned out to be less decisive than assumed (little involvement of member organisations in decision-making); 3. the headquarters level, where specific economic models (World Cups, other international circuits, World Championships), and organisational structures (decision-making procedures, values, leadership) trigger or hinder a federation's professionalization process. Based on our first analysis, an outline for an economic model is suggested, distinguishing four categories of IFs: “money-generating IFs” being rather based on commercialisation and strategic alliances; “classical Olympic IFs” being rather reactive and dependent on Olympic revenue; “classical non-Olympic IFs” being rather independent of the Olympic Movement; and “money-receiving IFs” being dependent on benefactors and having strong traditions and values. The results regarding forms and consequences will be outlined in the presentation.
The first results from the two pilot studies will allow us to refine our conceptual framework for subsequent case studies, thus extending our data collection and developing fundamental conclusions. References: Bayle, E., & Robinson, L. (2007). A framework for understanding the performance of national governing bodies of sport. European Sport Management Quarterly, 7, 249–268. Chantelat, P. (2001). La professionnalisation des organisations sportives: Nouveaux débats, nouveaux enjeux [Professionalisation of sport organisations]. Paris: L'Harmattan. Dowling, M., Edwards, J., & Washington, M. (2014). Understanding the concept of professionalization in sport management research. Sport Management Review. Advance online publication. doi: 10.1016/j.smr.2014.02.003. Ferkins, L., & Shilbury, D. (2012). Good boards are strategic: What does that mean for sport governance? Journal of Sport Management, 26, 67–80. Thibault, L., Slack, T., & Hinings, B. (1991). Professionalism, structures and systems: The impact of professional staff on voluntary sport organizations. International Review for the Sociology of Sport, 26, 83–97.

Abstract:

On-orbit exposures can come from numerous factors related to the space environment, as evidenced by almost 50 years of environmental samples collected for water analysis, air analysis, radiation analysis, and physiologic parameters. For astronauts and spaceflight participants the occupational exposures can be very different from those experienced by workers performing similar tasks in workplaces on Earth, because the duration of the exposure could be continuous for very long orbital, and eventually interplanetary, missions. The establishment of long-term exposure standards is vital to controlling the quality of the spacecraft environment over long periods. NASA often needs to update and revise its prior exposure standards (Spacecraft Maximum Allowable Concentrations (SMACs)). Traditional standards-setting processes are often lengthy, so a more rapid method to review and establish standards would be a substantial advancement in this area. This project investigates use of the Delphi method for this purpose. In order to achieve the objectives of this study a modified Delphi methodology was tested in three trials executed by doctoral students and a panel of experts in disciplines related to occupational safety and health. During each test/trial modifications were made to the methodology. Prior to submission of the Delphi questionnaire to the panel of experts, a pilot study/trial was conducted using five doctoral students with the goals of testing and adjusting the Delphi questionnaire to improve comprehension, work out any procedural issues and evaluate the effectiveness of the questionnaire in drawing the desired responses. The remainder of the study consisted of two trials of the modified Delphi process using 6 chemicals that currently have the potential of causing occupational exposures to NASA astronauts or spaceflight participants.
To assist in setting Occupational Exposure Limits (OEL), the expert panel was established consisting of experts from academia, government and industry. Evidence was collected and used to create close-ended questionnaires which were submitted to the Delphi panel of experts for the establishment of OEL values for three chemicals from the list of six originally selected (trial 1). Once the first Delphi trial was completed, adjustments were made to the Delphi questionnaires and the process above was repeated with the remaining 3 chemicals (trial 2). ^ Results indicate that experience in occupational safety and health and with OEL methodologies can have a positive effect in minimizing the time experts take in completing this process. Based on the results of the questionnaires and comparison of the results with the SMAC already established by NASA, we conclude that use of the Delphi methodology is appropriate for use in the decision-making process for the selection of OELs.^
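The statistical core of a Delphi round, pooling the panel's estimates and testing whether their spread is narrow enough to declare consensus, can be sketched as follows. The interquartile-range stopping rule and the threshold value are illustrative assumptions, not the criteria used in this study:

```python
# Hypothetical sketch of one statistical step in a Delphi round:
# aggregate the panel's proposed exposure limits and test whether
# the spread is tight enough to declare consensus.
from statistics import quantiles

def delphi_round(estimates, rel_iqr_threshold=0.25):
    """Summarise one round of expert OEL estimates (e.g., in ppm).

    Consensus is declared when the interquartile range is small
    relative to the median -- a common (but not NASA-specific)
    stopping rule for Delphi studies.
    """
    q1, q2, q3 = quantiles(estimates, n=4)
    iqr = q3 - q1
    return {"median": q2, "iqr": iqr,
            "consensus": iqr / q2 <= rel_iqr_threshold}

# Round 1: opinions diverge, so the questionnaire is revised and resent.
print(delphi_round([5.0, 8.0, 20.0, 6.0, 12.0]))
# Round 2: after controlled feedback, estimates cluster together.
print(delphi_round([7.0, 8.0, 9.0, 8.5, 7.5]))
```

Iterating rounds with controlled feedback until this condition holds is what distinguishes Delphi from a one-shot expert survey.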

Relevance:

100.00%

Publisher:

Abstract:

Major and rare earth element (REE) data for basalts from Holes 483, 483B, and 485A of DSDP Leg 65 (East Pacific Rise, mouth of the Gulf of California) support a simple fractional crystallization model for the genesis of this suite. The petrography and mineral chemistry (presented in detail elsewhere) provide no evidence for magma mixing, but rather for a simple multistage cooling process. Based on its lowest TiO2 content (0.88%), FeO*/MgO ratio (0.95, with total Fe as FeO), and Mg# (100·Mg/(Mg + Fe2+) = 70), sample 483-17-2-(78-83) has been selected as the most primitive primary magma of the samples analyzed. This is supported by the REE data, which show that this sample has the lowest total REE content, La/Sm_cn (chondrite-normalized) = 0.36, and Eu/Sm_cn = 1.05. Because the other samples analyzed have higher SiO2, lower Mg#, and a negative Eu anomaly (Eu/Sm_cn as low as 0.89), they are most likely derivative magmas. Wright-Doherty and trace element modelling support fractional crystallization of 14.1% plagioclase (An88), 6.7% olivine (Fo86), and 4.7% clinopyroxene (Wo41En49Fs10) from 483-17-2-(78-83) to form the least differentiated sample, with Mg# = 63. The La/Sm_cn of this derivative magma is almost identical to that of the parent magma (0.35 versus 0.36), but the other samples have higher La/Sm_cn (0.45 to 0.51), more total REE, and lower Mg# (60 to 56). Both Wright-Doherty and trace element modelling indicate that the chosen primary magma cannot produce these more evolved samples. For the major elements, calculated TiO2 and P2O5 are too low relative to the observed values (e.g., 1.38 versus 1.90 and 0.11 versus 0.17, respectively), and Rayleigh fractionation calculates a lower La/Sm_cn while requiring about 60% crystal removal versus 40% for the Wright-Doherty model. These more evolved samples must be derived from a parent magma different from the one selected here and, unfortunately, not sampled in this study.
A magma formed by a smaller degree of partial melting, with slightly more residual clinopyroxene left in the mantle than for sample 483-17-2-(78-83), is required.
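The trace element side of such modelling rests on the Rayleigh fractionation law, C_L/C_0 = F^(D−1), where F is the fraction of melt remaining and D the bulk solid/melt partition coefficient. A minimal sketch follows; the partition coefficients and parent concentrations are illustrative placeholders, not the values used in the study:

```python
# Rayleigh fractionation: concentration of a trace element in the
# residual liquid as crystals are progressively removed.

def rayleigh(c0, f, d):
    """C_L = C_0 * F**(D - 1), with F the melt fraction remaining
    and D the bulk solid/melt partition coefficient."""
    return c0 * f ** (d - 1.0)

# Assumed bulk D values for a plagioclase + olivine + clinopyroxene
# assemblage (both elements incompatible, La more so than Sm).
D = {"La": 0.05, "Sm": 0.10}
c0 = {"La": 2.0, "Sm": 3.0}  # ppm, hypothetical parent magma

for removed in (0.40, 0.60):  # 40% vs. 60% crystal removal
    f = 1.0 - removed
    cl = {el: rayleigh(c0[el], f, D[el]) for el in c0}
    print(f"{removed:.0%} removed: La={cl['La']:.2f} ppm, "
          f"Sm={cl['Sm']:.2f} ppm, La/Sm={cl['La'] / cl['Sm']:.3f}")
```

Because D_La < D_Sm, La is enriched in the liquid faster than Sm as crystallization proceeds, which is why the modelling links a higher La/Sm_cn to a larger fraction of crystals removed.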

Relevance:

100.00%

Publisher:

Abstract:

The objective of this thesis is the development of a complete navigation, learning, and planning system for a mobile robot. Among the many problems this broad objective raises, we have paid special attention to autonomous knowledge of the world. Our main concern has been to establish mechanisms that, starting from raw sensory information, support the incremental development of a topological model of the environment in which the robot moves. These mechanisms invariably rest on a new concept proposed in this thesis: the sensory gradient. The sensory gradient is a mathematical device that acts as a detector of events that are interesting to the system. Once such an event is detected, the robot can identify its situation on a topological map and act accordingly. We have called these special situations sensorially relevant places, because (a) they capture the system's attention and (b) they can be identified using sensory information. To conveniently exploit the models thus built, we have developed an algorithm capable of elaborating internalized plans, establishing a network of suggestions at the sensorially relevant places so that the robot finds at these points a recommended navigation direction. Finally, we have implemented a robust reactive navigation system with the ability to interpret the internalized plans and adapt them to the concrete circumstances of the moment. This navigation system is based on the theory of artificial potential fields, to which we have added the possibility of introducing fictitious charges as an aid to escaping local minima.
As an additional contribution of this thesis to the broader field of cognitive science, all of these elements are integrated into a memory-centred architecture. This underlines the important role played by memory in the cognitive processes of living beings and offers a conceptual turn on the traditional, process-centred point of view.
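The potential field scheme described above, an attractive force toward the goal, repulsive forces from obstacles, and an extra fictitious charge dropped where a local minimum is detected, can be sketched as follows. All gains, positions, and the charge-placement heuristic are illustrative assumptions, not the thesis implementation:

```python
# Minimal artificial potential field sketch with fictitious charges.
import math

def repulsion(pos, charge, gain=0.5):
    """Inverse-square repulsive force exerted by a point charge."""
    dx, dy = pos[0] - charge[0], pos[1] - charge[1]
    d2 = dx * dx + dy * dy + 1e-9  # epsilon avoids division by zero
    d = math.sqrt(d2)
    return (gain / d2 * dx / d, gain / d2 * dy / d)

def attraction(pos, goal, gain=1.0):
    """Linear attractive force toward the goal."""
    return (gain * (goal[0] - pos[0]), gain * (goal[1] - pos[1]))

def total_force(pos, goal, obstacles, fictitious=()):
    """Goal attraction plus repulsion from real obstacles and from
    any fictitious charges added to escape local minima."""
    fx, fy = attraction(pos, goal)
    for obs in list(obstacles) + list(fictitious):
        rx, ry = repulsion(pos, obs)
        fx, fy = fx + rx, fy + ry
    return (fx, fy)

# With an obstacle directly between robot and goal, attraction and
# repulsion nearly cancel: a local minimum, and the robot stalls.
pos, goal, obstacles = (0.435, 0.0), (2.0, 0.0), [(1.0, 0.0)]
print(total_force(pos, goal, obstacles))
# Dropping a fictitious charge just beside the stall point (an
# assumed placement heuristic) breaks the equilibrium sideways.
print(total_force(pos, goal, obstacles, fictitious=[(0.435, -0.3)]))
```

The fictitious charge perturbs the field only locally, so far from the minimum the robot still follows the original plan.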